bayesflow.summary_networks module#

class bayesflow.summary_networks.Bidirectional(*args, **kwargs)[source]#

Bases: Wrapper

Bidirectional wrapper for RNNs.

Args:
layer: keras.layers.RNN instance, such as keras.layers.LSTM or keras.layers.GRU. It could also be a keras.layers.Layer instance that meets the following criteria:

  1. Be a sequence-processing layer (accepts 3D+ inputs).

  2. Have a go_backwards, return_sequences and return_state attribute (with the same semantics as for the RNN class).

  3. Have an input_spec attribute.

  4. Implement serialization via get_config() and from_config().

Note that the recommended way to create new RNN layers is to write a custom RNN cell and use it with keras.layers.RNN, instead of subclassing keras.layers.Layer directly. When return_sequences is True, the output of the masked timestep will be zero regardless of the layer's original zero_output_for_mask value.

merge_mode: Mode by which outputs of the forward and backward RNNs will be combined. One of {'sum', 'mul', 'concat', 'ave', None}. If None, the outputs will not be combined; they will be returned as a list. Defaults to 'concat'.

backward_layer: Optional keras.layers.RNN, or keras.layers.Layer instance to be used to handle backwards input processing. If backward_layer is not provided, the layer instance passed as the layer argument will be used to generate the backward layer automatically. Note that the provided backward_layer layer should have properties matching those of the layer argument, in particular it should have the same values for stateful, return_states, return_sequences, etc. In addition, backward_layer and layer should have different go_backwards argument values. A ValueError will be raised if these requirements are not met.

Call arguments:
The call arguments for this layer are the same as those of the wrapped RNN layer.

Beware that when passing the initial_state argument during the call of this layer, the first half in the list of elements in the initial_state list will be passed to the forward RNN call and the last half in the list of elements will be passed to the backward RNN call.
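A minimal sketch of this state splitting (shapes and values are illustrative; an LSTM carries two state tensors per direction, so four tensors are passed in total):

```python
import tensorflow as tf

# Forward states come first in the list, backward states second.
bidir = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(4, return_sequences=True))

fwd_h, fwd_c = tf.zeros((1, 4)), tf.zeros((1, 4))  # forward h and c states
bwd_h, bwd_c = tf.zeros((1, 4)), tf.zeros((1, 4))  # backward h and c states
out = bidir(tf.ones((1, 5, 10)),
            initial_state=[fwd_h, fwd_c, bwd_h, bwd_c])  # shape (1, 5, 8)
```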

Raises:
ValueError:
  1. If layer or backward_layer is not a Layer instance.

  2. In case of invalid merge_mode argument.

  3. If backward_layer has mismatched properties compared to layer.

Examples:

```python
model = Sequential()
model.add(Bidirectional(LSTM(10, return_sequences=True),
                        input_shape=(5, 10)))
model.add(Bidirectional(LSTM(10)))
model.add(Dense(5))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# With custom backward layer
model = Sequential()
forward_layer = LSTM(10, return_sequences=True)
backward_layer = LSTM(10, activation='relu', return_sequences=True,
                      go_backwards=True)
model.add(Bidirectional(forward_layer, backward_layer=backward_layer,
                        input_shape=(5, 10)))
model.add(Dense(5))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```

__call__(inputs, initial_state=None, constants=None, **kwargs)[source]#

Bidirectional.__call__ implements the same API as the wrapped RNN.

property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These metrics become part of the model's topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:
value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:
name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer's "trainable_variables" (e.g. variables, biases) or "non_trainable_variables" (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving an unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.
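For illustration, a hedged sketch of the typical use of add_weight inside a custom layer's build() (the layer and its names are invented for this example):

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    """Hypothetical layer creating a kernel and bias via add_weight."""

    def __init__(self, units=4, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Trainable kernel of shape (input_dim, units).
        self.kernel = self.add_weight(
            name="kernel",
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
        )
        self.bias = self.add_weight(
            name="bias", shape=(self.units,), initializer="zeros")

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel) + self.bias
```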

build(input_shape)[source]#

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Args:
input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config["input_shape"]) method, which creates weights based on the layer's input shape in the supplied config. If your config contains other information needed to load the layer's state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(inputs, training=None, mask=None, initial_state=None, constants=None)[source]#

Bidirectional.call implements the same API as the wrapped RNN.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_mask(inputs, mask)[source]#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
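As a quick illustration (the printed shape assumes the default merge_mode='concat', which doubles the feature dimension):

```python
import tensorflow as tf

# Query the output shape without running data through the layer.
layer = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(8))
print(layer.compute_output_shape((None, 5, 10)))  # -> (None, 16)
```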

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

property constraints#

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn't yet built (in which case its weights aren't yet defined).
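A small illustration (a Dense layer with a 4x3 kernel and a bias of length 3):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(3)
layer.build((None, 4))       # weights must exist before they can be counted
print(layer.count_params())  # 4 * 3 + 3 = 15
```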

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

finalize_state()#

Finalizes the layer's state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer's weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

classmethod from_config(config, custom_objects=None)[source]#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_config()[source]#

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of the dict every time it is called. Callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.
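For illustration, a sketch of a config round trip; the clone shares the configuration but starts with freshly initialized weights:

```python
import tensorflow as tf

layer = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(8))
config = layer.get_config()
clone = tf.keras.layers.Bidirectional.from_config(config)
```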

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.
Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weights()#

Returns the current weights of the layer, as NumPy arrays.

The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Returns:

Weights values as a list of NumPy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.

AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.

RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:

  • Structure (e.g. a single input, a list of 2 inputs, etc)

  • Shape

  • Rank (ndim)

  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

property metrics#

List of metrics attached to the layer.

Returns:

A list of Metric objects.

property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

reset_states(states=None)[source]#

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
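A hedged sketch of overriding both hooks on a custom layer (the layer and the store key are invented for this example):

```python
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    # Hypothetical layer: save and restore its single weight under an
    # explicit key instead of the default positional keys.
    def build(self, input_shape):
        self.kernel = self.add_weight(
            name="kernel", shape=(input_shape[-1], 4), initializer="ones")

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

    def save_own_variables(self, store):
        store["kernel"] = self.kernel.numpy()

    def load_own_variables(self, store):
        self.kernel.assign(store["kernel"])
```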

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the layer's specifications.

property stateful#

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

property trainable#

property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#

property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.DeepSet(*args, **kwargs)[source]#

Bases: Model

Implements a deep permutation-invariant network according to [1] and [2].

[1] Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., & Smola, A. J. (2017). Deep sets. Advances in neural information processing systems, 30.

[2] Bloem-Reddy, B., & Teh, Y. W. (2020). Probabilistic Symmetries and Invariant Neural Networks. J. Mach. Learn. Res., 21, 90-1.

Creates a stack of ‘num_equiv’ equivariant layers followed by a final invariant layer.

Parameters:
summary_dim : int, optional, default: 10

The number of learned summary statistics.

num_dense_s1 : int, optional, default: 2

The number of dense layers in the inner function of a deep set.

num_dense_s2 : int, optional, default: 2

The number of dense layers in the outer function of a deep set.

num_dense_s3 : int, optional, default: 2

The number of dense layers in an equivariant layer.

num_equiv : int, optional, default: 2

The number of equivariant layers in the network.

dense_s1_args : dict or None, optional, default: None

The arguments for the dense layers of s1 (inner, pre-pooling function). If None, defaults will be used (see default_settings). Otherwise, all arguments for a tf.keras.layers.Dense layer are supported.

dense_s2_args : dict or None, optional, default: None

The arguments for the dense layers of s2 (outer, post-pooling function). If None, defaults will be used (see default_settings). Otherwise, all arguments for a tf.keras.layers.Dense layer are supported.

dense_s3_args : dict or None, optional, default: None

The arguments for the dense layers of s3 (equivariant function). If None, defaults will be used (see default_settings). Otherwise, all arguments for a tf.keras.layers.Dense layer are supported.

pooling_fun : str or callable, optional, default: 'mean'

If a string argument is provided, it should be one of ['mean', 'max']. Alternatively, an actual neural network can be passed for learnable pooling.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the __init__() method of tf.keras.Model.
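A minimal usage sketch with the documented defaults (the shapes are illustrative):

```python
import numpy as np
from bayesflow.summary_networks import DeepSet

summary_net = DeepSet(summary_dim=10)

# 32 simulated datasets, each with 50 exchangeable observations of dim 3.
x = np.random.normal(size=(32, 50, 3)).astype("float32")
summaries = summary_net(x)  # -> shape (32, 10)
```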

__call__(*args, **kwargs)#

property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These metrics become part of the model's topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:
value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:
name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer's "trainable_variables" (e.g. variables, biases) or "non_trainable_variables" (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving an unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config["input_shape"]) method, which creates weights based on the layer's input shape in the supplied config. If your config contains other information needed to load the layer's state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass of a learnable deep invariant transformation consisting of a sequence of equivariant transforms followed by an invariant transform.

Parameters:
x : tf.Tensor

Input of shape (batch_size, n_obs, data_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, out_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model's predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, the return value has shape (batch_size, d0, .. dN-1), i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, or tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or 'auto'. The number of batches to run during each tf.function call. If set to "auto", keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or 'auto'. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of 'auto' turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. Defaults to 0, meaning no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn't yet built (in which case its weights aren't yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: "auto", 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. "auto" defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to 'auto'.

sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, 'evaluate' will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can't be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
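For illustration, a self-contained evaluation sketch on random data (the model and data are invented for this example):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])

x = np.random.normal(size=(64, 10)).astype("float32")
y = np.random.normal(size=(64, 1)).astype("float32")
results = model.evaluate(x, y, batch_size=32, return_dict=True)
# e.g. {'loss': ..., 'mae': ...}
```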

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layers state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to 'auto'.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
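To make the interplay of these arguments concrete, here is a minimal, hypothetical training call (random placeholder data; the option values are arbitrary):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.random((100, 4))
y = np.random.randint(0, 2, (100,))

history = model.fit(
    x, y,
    batch_size=16,
    epochs=5,
    validation_split=0.2,           # hold out the last 20% of samples
    class_weight={0: 1.0, 1: 2.0},  # up-weight an under-represented class
    verbose=2)

# History.history records per-epoch losses and metrics.
print(history.history.keys())
```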

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
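As a hedged sketch of the override pattern described above (the MyModel name and hidden_units parameter are invented for illustration):

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self, hidden_units=16, **kwargs):
        super().__init__(**kwargs)
        self.hidden_units = hidden_units
        self.dense = tf.keras.layers.Dense(hidden_units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        # Continue updating the dict from the parent config, as advised.
        config = super().get_config()
        config.update({"hidden_units": self.hidden_units})
        return config

# from_config is the inverse: same architecture, fresh (untrained) weights.
config = MyModel(hidden_units=32).get_config()
restored = MyModel.from_config(config)
```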

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.
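For example (the layer names below are hypothetical):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, name="hidden"),
    tf.keras.layers.Dense(1, name="out")])
model.build((None, 3))

hidden = model.get_layer(name="hidden")  # lookup by unique name
out = model.get_layer(index=1)           # lookup by traversal index
```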

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric results is a dict (containing multiple metrics), each of them gets added to the top-level dict returned by this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.

AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.

RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.
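Assuming the usual compile-time flag, a minimal sketch of opting in to XLA:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
# Opt in to XLA compilation; not all models benefit (or even run)
# under XLA, so treat this as an experiment.
model.compile(optimizer="adam", loss="mse", jit_compile=True)
print(model.jit_compile)  # True
```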

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
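A minimal save/load round trip under these rules (the paths are placeholders; the .h5 suffix selects the legacy HDF5 format as described above):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.save_weights("weights.h5")  # HDF5 because of the suffix

# Topological loading into an architecturally identical model.
clone = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
clone.load_weights("weights.h5")

# Name-based loading is available for .h5 weight files, e.g. for
# transfer learning where only some layer names match.
clone.load_weights("weights.h5", by_name=True)
```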

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. "auto" becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to 'auto'.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
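A small sketch contrasting predict() with a direct call, following the guidance above (shapes are arbitrary):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
x = np.random.random((64, 4))

# Batched inference over many samples.
preds = model.predict(x, batch_size=32, verbose=0)

# For a single small batch, calling the model directly is faster.
small = model(x[:4], training=False)
print(preds.shape, small.numpy().shape)
```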

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
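A hedged sketch of a custom predict_step (the ThresholdModel class and its post-processing are invented for illustration; it assumes features-only input, as with model.predict(x) on a plain array):

```python
import tensorflow as tf

class ThresholdModel(tf.keras.Model):  # hypothetical example class

    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1, activation="sigmoid")

    def call(self, inputs):
        return self.dense(inputs)

    def predict_step(self, data):
        # Forward pass in inference mode, then post-process.
        probs = self(data, training=False)
        return tf.cast(probs > 0.5, tf.int32)
```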

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:

model: Keras model instance to be saved.

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number

of arrays and their shapes must match the number and shapes of the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which are the starting layer name and ending layer name (both inclusive), indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names. In that case, the start predicate will be the first element that matches layer_range[0], and the end predicate will be the last element that matches layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
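For instance, print_fn can be used to capture the summary as a string (a common pattern; the model below is a placeholder):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

lines = []
model.summary(line_length=80, print_fn=lines.append)
summary_text = "\n".join(lines)
```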

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.
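A hedged sketch of a custom test_step using the compiled loss and metrics (it assumes (inputs, targets) batches without sample weights; the class name is invented):

```python
import tensorflow as tf

class CustomEvalModel(tf.keras.Model):  # hypothetical example class

    def test_step(self, data):
        x, y = data                       # assumes (inputs, targets) batches
        y_pred = self(x, training=False)  # forward pass in inference mode
        # Update the compiled loss and metrics; their results are reported.
        self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

# Subclassed models can be wired functionally, e.g.:
# model = CustomEvalModel(inputs, outputs); model.compile(loss="mse")
```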

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to

json.dumps().

Returns:

A JSON string.
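A minimal round trip (the JSON captures the architecture only; weights are not included):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
json_string = model.to_json()

# Reinstantiate the architecture; weights are freshly initialized.
restored = tf.keras.models.model_from_json(json_string)
```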

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
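A hedged sketch of the standard override, closely following the guide linked above (assumes (inputs, targets) batches without sample weights):

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):  # hypothetical example class

    def train_step(self, data):
        x, y = data  # assumes (inputs, targets) batches
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # forward pass
            loss = self.compiled_loss(y, y_pred,
                                      regularization_losses=self.losses)
        # Backpropagation and a single optimizer update.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Update the metrics configured in compile().
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```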

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.Dense(*args, **kwargs)[source]#

Bases: Layer

Just your regular densely-connected NN layer.

Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True). These are all attributes of Dense.

Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 0 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).

Note also that layer attributes cannot be modified after the layer has been called once (except the trainable attribute). When the popular kwarg input_shape is passed, Keras will create an input layer to insert before the current layer. This is equivalent to explicitly defining an InputLayer.

Example:

>>> # Create a `Sequential` model and add a Dense layer as the first layer.
>>> model = tf.keras.models.Sequential()
>>> model.add(tf.keras.Input(shape=(16,)))
>>> model.add(tf.keras.layers.Dense(32, activation='relu'))
>>> # Now the model will take as input arrays of shape (None, 16)
>>> # and output arrays of shape (None, 32).
>>> # Note that after the first layer, you don't need to specify
>>> # the size of the input anymore:
>>> model.add(tf.keras.layers.Dense(32))
>>> model.output_shape
(None, 32)
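The rank > 2 behavior described in the note above can be verified directly; a short sketch:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
x = tf.ones((2, 5, 3))   # (batch_size=2, d0=5, d1=3)
y = layer(x)
print(y.shape)           # (2, 5, 4); the kernel has shape (3, 4)
```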
Args:

units: Positive integer, dimensionality of the output space.
activation: Activation function to use. If you don’t specify anything, no activation is applied (i.e. “linear” activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its “activation”).
kernel_constraint: Constraint function applied to the kernel weights matrix.
bias_constraint: Constraint function applied to the bias vector.

Input shape:

N-D tensor with shape: (batch_size, …, input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).

Output shape:

N-D tensor with shape: (batch_size, …, units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).

__call__(*args, **kwargs)#

Wraps call, applying pre- and post-processing steps.

Args:

*args: Positional arguments to be passed to self.call.
**kwargs: Keyword arguments to be passed to self.call.

Returns:

Output tensor(s).

Note:
  • The following optional keyword arguments are reserved for specific uses:
    • training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
    • mask: Boolean input mask.
  • If the layer’s call method takes a mask argument (as some Keras layers do), its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
  • If the layer is not built, the method will call build.

Raises:

ValueError: if the layer’s call method returns None (an invalid value).
RuntimeError: if super().__init__() was not called in the constructor.

property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These losses become part of the model’s topology and are tracked in `get_config`.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:

losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.
**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These metrics become part of the model’s topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:

updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:

ValueError: When giving unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.
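A minimal sketch of typical usage, creating weights inside build() of a hypothetical custom layer:

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    """Hypothetical layer illustrating add_weight inside build()."""

    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Kernel and bias become part of trainable_weights automatically.
        self.w = self.add_weight(name="kernel",
                                 shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform",
                                 trainable=True)
        self.b = self.add_weight(name="bias", shape=(self.units,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b
```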

build(input_shape)[source]#

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Args:

input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(inputs)[source]#

This is where the layer’s logic lives.

The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state, including tf.Variable instances and nested Layer instances, in __init__(), or in the build() method that is called automatically before call() executes for the first time.

Args:

inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:
  • inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
  • NumPy array or Python scalar values in inputs get cast as tensors.
  • Keras mask metadata is only collected from inputs.
  • Layers are built (build(input_shape) method) using shape info from inputs only.
  • input_spec compatibility is only checked against inputs.
  • Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
  • The SavedModel input specification is generated using inputs only.
  • Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc. is only supported for inputs and not for tensors in positional and keyword arguments.

*args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.

**kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
  • training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
  • mask: Boolean input mask. If the layer’s call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).

Returns:

A tensor or list/tuple of tensors.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_output_shape(input_shape)[source]#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:

input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
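For example, shape inference for a Dense layer (a small sketch; note that the call also builds the layer):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(32)
# Shape inference without running the layer on data.
print(layer.compute_output_shape((None, 16)))  # (None, 32)
```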

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:

input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:

Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.
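A short sketch of the signature variant, assuming a float32 dtype policy:

```python
import tensorflow as tf

spec = tf.TensorSpec(shape=(None, 16), dtype=tf.float32)
out_spec = tf.keras.layers.Dense(32).compute_output_signature(spec)
print(out_spec)  # e.g. TensorSpec(shape=(None, 32), dtype=tf.float32, name=None)
```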

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:

ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).
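For instance (the layer must be built first, here via an explicit input shape):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(32)
layer.build((None, 16))        # weights must exist before counting
print(layer.count_params())    # 16 * 32 kernel entries + 32 biases = 544
```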

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer’s weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

classmethod from_config(config)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:

config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_config()[source]#

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.
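A typical config round trip (sketch):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(16, activation="relu")
config = layer.get_config()
# Re-instantiates the same layer architecture, without trained weights.
clone = tf.keras.layers.Dense.from_config(config)
assert clone.units == 16 and clone.get_config()["activation"] == "relu"
```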

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weights()#

Returns the current weights of the layer, as NumPy arrays.

The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Returns:

Weights values as a list of NumPy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

property metrics#

List of metrics attached to the layer.

Returns:

A list of Metric objects.

property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.
RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
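A hedged sketch of how this hook pairs with load_own_variables() above; it reproduces the documented default behavior (one entry per weight, keyed by a string index) rather than adding new state:

```python
import tensorflow as tf

class MyDense(tf.keras.layers.Dense):
    """Hypothetical subclass overriding the state save/load hooks."""

    def save_own_variables(self, store):
        # Mirror of the default: one entry per weight, keyed by index.
        for i, v in enumerate(self.weights):
            store[str(i)] = v.numpy()

    def load_own_variables(self, store):
        for i, v in enumerate(self.weights):
            v.assign(store[str(i)])
```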

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:

weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:

ValueError: If the provided weights list does not match the layer’s specifications.

property stateful#
property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.EquivariantModule(*args, **kwargs)[source]#

Bases: Model

Implements an equivariant module performing an equivariant transform.

For details and justification, see:

[1] Bloem-Reddy, B., & Teh, Y. W. (2020). Probabilistic Symmetries and Invariant Neural Networks. J. Mach. Learn. Res., 21, 90-1. https://www.jmlr.org/papers/volume21/19-322/19-322.pdf

Creates an equivariant module according to [1] which combines equivariant transforms with nested invariant transforms, thereby enabling interactions between set members.

Parameters:

settings : dict
    A dictionary holding the configuration settings for the module.

**kwargs : dict, optional, default: {}
    Optional keyword arguments passed to the tf.keras.Model constructor.

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These losses become part of the model’s topology and are tracked in `get_config`.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:

losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.
**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These metrics become part of the model’s topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:

updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:

ValueError: When giving unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:

input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.
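A standalone-build sketch for a subclassed model (class and layer names are illustrative):

```python
import tensorflow as tf

class TwoLayerNet(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(16, activation="relu")
        self.out = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.out(self.hidden(x))

model = TwoLayerNet()
model.build((None, 8))       # creates all weights without real data
print(len(model.weights))    # 4: two kernels and two biases
```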

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass of a learnable equivariant transform.

Parameters:

x : tf.Tensor
    Input of shape (batch_size, …, x_dim)

Returns:

out : tf.Tensor
    Output of shape (batch_size, …, equiv_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See

tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or

a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during

training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar

coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to

run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for

tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)
        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:

input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:

input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:

Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:

ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
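
For illustration, a minimal usage sketch; model, x_test, and y_test are placeholder names for a compiled Keras model and matching NumPy arrays:

```python
# Hedged sketch: `model` is any compiled tf.keras.Model; x_test / y_test
# are NumPy arrays with shapes matching the model's inputs and outputs.
results = model.evaluate(x_test, y_test, batch_size=64, return_dict=True)
print(results)  # e.g. {'loss': 0.25, 'mae': 0.4}; keys follow model.metrics_names
```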

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This method can be overridden in a layer subclass and will be called after a layer’s weights have been updated. Override it to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance; instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to child processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)
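
As a sketch of the tuple convention described above, the following builds a tf.data.Dataset whose elements are (inputs, targets, sample_weights) triples; the shapes and the commented fit call are placeholders:

```python
import numpy as np
import tensorflow as tf

x = np.random.random((100, 3)).astype("float32")
y = np.random.random((100, 1)).astype("float32")
w = np.ones((100,), dtype="float32")  # per-sample weights

# Each element is an (inputs, targets, sample_weights) tuple, which fit()
# unpacks into x, y, and sample_weight automatically.
dataset = tf.data.Dataset.from_tensor_slices((x, y, w)).batch(32)
# model.fit(dataset, epochs=2)  # `model` is assumed to be a compiled model
```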

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. if model.fit is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
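
Putting the main arguments together, a representative call might look like the following sketch; model, x_train, and y_train are placeholder names:

```python
# Hedged sketch: `model` is a compiled tf.keras.Model; x_train / y_train
# are NumPy arrays.
history = model.fit(
    x_train,
    y_train,
    batch_size=32,
    epochs=10,
    validation_split=0.2,  # hold out the last 20% of samples
    callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)],
)
print(history.history.keys())  # e.g. dict_keys(['loss', 'val_loss', ...])
```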

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return the config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
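
A minimal round-trip sketch combining get_config() and from_config(), assuming a functional model (subclassed models typically require a custom get_config()):

```python
# Rebuild the architecture (without weights) from its config dict.
config = model.get_config()                     # `model` is an existing Model
new_model = tf.keras.Model.from_config(config)  # fresh, untrained instance
```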

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric results is a dict (containing multiple metrics), each of them gets added to the top-level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.

AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.

RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)

  • Shape

  • Rank (ndim)

  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.
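
In practice, jit_compile is usually set when compiling the model; a minimal sketch:

```python
# Opt in to XLA compilation (may not work for every model).
model.compile(optimizer="adam", loss="mse", jit_compile=True)
print(model.jit_compile)  # True
```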

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
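
A save/load round-trip sketch, assuming model and new_model share the same architecture; the file paths are placeholders:

```python
# TensorFlow-format checkpoint (writes several files with this prefix).
model.save_weights("ckpt/my_model")
new_model.load_weights("ckpt/my_model")

# HDF5 alternative; by_name=True matches layers by their names.
# model.save_weights("weights.h5")
# new_model.load_weights("weights.h5", by_name=True)
```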

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().
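
The contrast described above, as a sketch; model, big_dataset, and small_batch are placeholder names:

```python
# Batched inference over many inputs; returns NumPy arrays.
preds = model.predict(big_dataset)

# Direct call for a single small batch; returns an eager tensor.
out = model(small_batch, training=False)
out_np = out.numpy()  # convert when NumPy values are needed
```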

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to child processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
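
A debugging sketch: toggle eager execution around a short training run (the commented fit arguments are placeholders):

```python
model.run_eagerly = True   # step-by-step execution; easier to debug
# model.fit(x, y, epochs=1)  # breakpoints inside call()/train_step() now work
model.run_eagerly = False  # back to compiled-graph execution
```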

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:
filepath: str or pathlib.Path object. Path where to save the

model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of the weight tensors of the layer).

    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number

of arrays and their shapes must match the number and shapes of the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names. In that case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
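
For example, print_fn can be used to capture the summary as a string instead of printing it, as in this sketch:

```python
lines = []
model.summary(print_fn=lines.append)  # collect each summary line
summary_text = "\n".join(lines)
```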

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to

json.dumps().

Returns:

A JSON string.
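
A round-trip sketch (architecture only; weights are not included):

```python
json_string = model.to_json()
rebuilt = tf.keras.models.model_from_json(json_string)
```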

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays

    (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors

    (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
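
A single-batch update sketch, assuming a compiled model and one batch of placeholder NumPy data x_batch / y_batch:

```python
logs = model.train_on_batch(x_batch, y_batch, return_dict=True)
print(logs)  # e.g. {'loss': 0.31, 'accuracy': 0.62}
```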

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
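
As a sketch of the override pattern (closely following the guide linked above), a subclass might implement the forward pass, loss, gradient update, and metric updates like this:

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data  # assumes the iterator yields (inputs, targets)
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # forward pass
            loss = self.compiled_loss(
                y, y_pred, regularization_losses=self.losses)
        # Backpropagation and one optimizer step.
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(
            zip(gradients, self.trainable_variables))
        # Update and report the compiled metrics.
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```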

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.GRU(*args, **kwargs)[source]#

Bases: DropoutRNNCellMixin, RNN, BaseRandomLayer

Gated Recurrent Unit - Cho et al. 2014.

See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) for details about the usage of RNN API.

Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation.

The requirements to use the cuDNN implementation are:

  1. activation == tanh

  2. recurrent_activation == sigmoid

  3. recurrent_dropout == 0

  4. unroll is False

  5. use_bias is True

  6. reset_after is True

  7. Inputs, if masking is used, are strictly right-padded.

  8. Eager execution is enabled in the outermost context.

There are two variants of the GRU implementation. The default one is based on [v3](https://arxiv.org/abs/1406.1078v3) and has the reset gate applied to the hidden state before matrix multiplication. The other one is based on the [original](https://arxiv.org/abs/1406.1078v1) and has the order reversed.

The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU. Thus it has separate biases for kernel and recurrent_kernel. To use this variant, set reset_after=True and recurrent_activation=’sigmoid’.

For example:

>>> inputs = tf.random.normal([32, 10, 8])
>>> gru = tf.keras.layers.GRU(4)
>>> output = gru(inputs)
>>> print(output.shape)
(32, 4)
>>> gru = tf.keras.layers.GRU(4, return_sequences=True, return_state=True)
>>> whole_sequence_output, final_state = gru(inputs)
>>> print(whole_sequence_output.shape)
(32, 10, 4)
>>> print(final_state.shape)
(32, 4)
Args:

units: Positive integer, dimensionality of the output space.

activation: Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

recurrent_activation: Activation function to use for the recurrent step. Default: sigmoid (sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

use_bias: Boolean (default True), whether the layer uses a bias vector.

kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.

recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal.

bias_initializer: Initializer for the bias vector. Default: zeros.

kernel_regularizer: Regularizer function applied to the kernel weights matrix. Default: None.

recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

bias_regularizer: Regularizer function applied to the bias vector. Default: None.

activity_regularizer: Regularizer function applied to the output of the layer (its “activation”). Default: None.

kernel_constraint: Constraint function applied to the kernel weights matrix. Default: None.

recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix. Default: None.

bias_constraint: Constraint function applied to the bias vector. Default: None.

dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.

return_state: Boolean. Whether to return the last state in addition to the output. Default: False.

go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.

stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as the initial state for the sample of index i in the following batch.

unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

time_major: The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape [timesteps, batch, feature], whereas in the False case, it will be [batch, timesteps, feature]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.

reset_after: GRU convention (whether to apply the reset gate after or before matrix multiplication). False = “before”, True = “after” (default and cuDNN compatible).

Call arguments:

inputs: A 3D tensor, with shape [batch, timesteps, feature].

mask: Binary tensor of shape [samples, timesteps] indicating whether a given timestep should be masked (optional). An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. Defaults to None.

training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional). Defaults to None.

initial_state: List of initial state tensors to be passed to the first call of the cell (optional; None causes creation of zero-filled initial state tensors). Defaults to None.
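To illustrate the call arguments, a small sketch with assumed shapes (a right-padded mask, as required for the cuDNN kernel, and an explicit zero initial state):

```python
import tensorflow as tf

inputs = tf.random.normal([32, 10, 8])   # [batch, timesteps, feature]
# Right-padded mask: only the first 6 of 10 timesteps are valid.
mask = tf.sequence_mask(tf.fill([32], 6), maxlen=10)
initial_state = tf.zeros([32, 4])        # [batch, units]

gru = tf.keras.layers.GRU(4)
output = gru(inputs, mask=mask, initial_state=initial_state, training=False)
print(output.shape)  # (32, 4)
```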

Initialize the BaseRandomLayer.

Note that the constructor is annotated with @no_automatic_dependency_tracking. This is to skip the auto-tracking of the self._random_generator instance, which is an AutoTrackable. The backend.RandomGenerator could contain a tf.random.Generator instance, which will have a tf.Variable as its internal state. We want to avoid saving that state into model.weights and checkpoints for backward compatibility reasons. In the meantime, we still need to make it visible to SavedModel when it is tracing the tf.function for call(). See _list_extra_dependencies_for_serialization below for more details.

Args:

seed: optional integer, used to create the RandomGenerator.

force_generator: boolean, defaults to False; whether to force the RandomGenerator to use the code branch of tf.random.Generator.

rng_type: string, the rng type that will be passed to the backend RandomGenerator. None will allow the RandomGenerator to choose the type by itself. Valid values are “stateful”, “stateless”, “legacy_stateful”. Defaults to None.

**kwargs: other keyword arguments that will be passed to the parent class.

__call__(inputs, initial_state=None, constants=None, **kwargs)#

Call self as a function.

property activation#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:

losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:

updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:

ValueError: When giving an unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.
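A minimal sketch of add_weight in use, creating a layer’s state inside build(); the Linear layer here is an illustrative assumption, not part of this module:

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):

    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Trainable kernel and bias, created once the input dim is known.
        self.kernel = self.add_weight(
            name='kernel', shape=(input_shape[-1], self.units),
            initializer='glorot_uniform', trainable=True)
        self.bias = self.add_weight(
            name='bias', shape=(self.units,),
            initializer='zeros', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel) + self.bias
```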

property bias_constraint#
property bias_initializer#
property bias_regularizer#
build(input_shape)#

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Args:

input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(inputs, mask=None, training=None, initial_state=None)[source]#

This is where the layer’s logic lives.

The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state, including tf.Variable instances and nested Layer instances, in __init__(), or in the build() method that is called automatically before call() executes for the first time.

Args:

inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:

  • inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.

  • NumPy array or Python scalar values in inputs get cast as tensors.

  • Keras mask metadata is only collected from inputs.

  • Layers are built (build(input_shape) method) using shape info from inputs only.

  • input_spec compatibility is only checked against inputs.

  • Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.

  • The SavedModel input specification is generated using inputs only.

  • Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc. is only supported for inputs and not for tensors in positional and keyword arguments.

*args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.

**kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:

  • training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.

  • mask: Boolean input mask. If the layer’s call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).

Returns:

A tensor or list/tuple of tensors.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.
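A short sketch of the compute-dtype/variable-dtype split under mixed precision (assuming a TensorFlow version that provides tf.keras.mixed_precision):

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy('mixed_float16')
layer = tf.keras.layers.Dense(4)
print(layer.compute_dtype)  # 'float16' -- computations and outputs
print(layer.dtype)          # 'float32' -- the weight (variable) dtype
tf.keras.mixed_precision.set_global_policy('float32')  # restore default
```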

compute_mask(inputs, mask)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:

input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:

input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:

Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:

ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).

property dropout#
property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This method can be overridden in a subclassed layer and will be called after the layer’s weights are updated; it can be used to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

classmethod from_config(config)[source]#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:

config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_config()[source]#

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.
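A minimal round-trip sketch: from_config(get_config()) rebuilds an identically configured (but untrained) layer:

```python
import tensorflow as tf

gru = tf.keras.layers.GRU(4, return_sequences=True, dropout=0.1)
config = gru.get_config()                    # plain, serializable dict
clone = tf.keras.layers.GRU.from_config(config)
print(clone.units, clone.return_sequences)   # 4 True
```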

get_dropout_mask_for_cell(inputs, training, count=1)#

Get the dropout mask for RNN cell’s input.

It will create a mask based on context if there isn’t an existing cached mask. If a new mask is generated, it will update the cache in the cell.

Args:

inputs: The input tensor whose shape will be used to generate the dropout mask.

training: Boolean tensor, whether it’s in training mode; dropout will be ignored in non-training mode.

count: Int, how many dropout masks will be generated. This is useful for cells that have internal weights fused together.

Returns:

List of mask tensors; generated or cached masks based on context.

get_initial_state(inputs)#
get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_recurrent_dropout_mask_for_cell(inputs, training, count=1)#

Get the recurrent dropout mask for RNN cell.

It will create a mask based on context if there isn’t an existing cached mask. If a new mask is generated, it will update the cache in the cell.

Args:

inputs: The input tensor whose shape will be used to generate the dropout mask.

training: Boolean tensor, whether it’s in training mode; dropout will be ignored in non-training mode.

count: Int, how many dropout masks will be generated. This is useful for cells that have internal weights fused together.

Returns:

List of mask tensors; generated or cached masks based on context.

get_weights()#

Returns the current weights of the layer, as NumPy arrays.

The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Returns:

Weights values as a list of NumPy arrays.

property implementation#
property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.

AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.

RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:

  • Structure (e.g. a single input, a list of 2 inputs, etc)

  • Shape

  • Rank (ndim)

  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property kernel_constraint#
property kernel_initializer#
property kernel_regularizer#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

property metrics#

List of metrics attached to the layer.

Returns:

A list of Metric objects.

property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

property recurrent_activation#
property recurrent_constraint#
property recurrent_dropout#
property recurrent_initializer#
property recurrent_regularizer#
property reset_after#
reset_dropout_mask()#

Reset the cached dropout masks if any.

It is important that the RNN layer invokes this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn’t be cached between batches. Otherwise it will introduce an unreasonable bias against certain indices of data within the batch.

reset_recurrent_dropout_mask()#

Reset the cached recurrent dropout masks if any.

It is important that the RNN layer invokes this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn’t be cached between batches. Otherwise it will introduce an unreasonable bias against certain indices of data within the batch.

reset_states(states=None)#

Reset the recorded states for the stateful RNN layer.

Can only be used when the RNN layer is constructed with stateful=True.

Args:

states: Numpy arrays that contain the value for the initial state, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.

Raises:

AttributeError: When the RNN layer is not stateful.

ValueError: When the batch size of the RNN layer is unknown.

ValueError: When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise.
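A hedged sketch of stateful operation (the fixed batch size via batch_input_shape and the shapes are illustrative assumptions):

```python
import tensorflow as tf

# Stateful RNNs require a fixed batch size.
gru = tf.keras.layers.GRU(4, stateful=True, batch_input_shape=(32, 10, 8))
chunk_1 = tf.random.normal([32, 10, 8])
chunk_2 = tf.random.normal([32, 10, 8])
gru(chunk_1)        # final state is recorded...
gru(chunk_2)        # ...and used as the initial state here
gru.reset_states()  # zero the recorded state before a new set of sequences
```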

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
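A hypothetical sketch of overriding both hooks to store weights under explicit keys instead of the default positional keys; the NamedDense class and the key names are illustrative assumptions, not part of this module:

```python
import tensorflow as tf

class NamedDense(tf.keras.layers.Dense):

    def save_own_variables(self, store):
        # `store` behaves like a dict that the saving machinery persists.
        store['kernel'] = self.kernel.numpy()
        store['bias'] = self.bias.numpy()

    def load_own_variables(self, store):
        # Read back under the same keys chosen in save_own_variables().
        self.kernel.assign(store['kernel'])
        self.bias.assign(store['bias'])
```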

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:

weights: a list of NumPy arrays. The number of arrays and their shapes must match the number and shapes of the weights of the layer (i.e. it should match the output of get_weights).

Raises:

ValueError: If the provided weights list does not match the layer’s specifications.

property stateful#
property states#
property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property units#
property updates#
property use_bias#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.HierarchicalNetwork(*args, **kwargs)[source]#

Bases: Model

Implements a hierarchical summary network according to [1].

[1] Elsemüller, L., Schnuerch, M., Bürkner, P. C., & Radev, S. T. (2023). A Deep Learning Method for Comparing Bayesian Hierarchical Models. arXiv preprint arXiv:2301.11873.

Creates a hierarchical network consisting of stacked summary networks (one for each hierarchical level) that are aligned with the probabilistic structure of the processed data.

Note: The networks will start processing from the lowest hierarchical level (e.g., observational level) up to the highest hierarchical level. It is recommended to provide higher-level networks with more expressive power to allow for an adequate compression of lower-level data.

Example: For two-level hierarchical models with the assumption of temporal dependencies on the lowest hierarchical level (e.g., observational level) and exchangeable units at the higher level (e.g., group level), a list of [SequenceNetwork(), DeepSet()] could be passed.


Parameters:

networks_list : list of tf.keras.Model
    The list of summary networks (one per hierarchical level), starting from the lowest hierarchical level.
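Following the two-level example above, a minimal construction sketch (the default constructor arguments of SequenceNetwork and DeepSet are assumed to suit the data at hand):

```python
from bayesflow.summary_networks import (
    DeepSet, HierarchicalNetwork, SequenceNetwork)

summary_net = HierarchicalNetwork(
    networks_list=[
        SequenceNetwork(),  # lowest level: temporal dependencies within units
        DeepSet(),          # higher level: exchangeable units (e.g., groups)
    ]
)
```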

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:

losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:

updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:

ValueError: When giving an unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:

input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, return_all=False, **kwargs)[source]#

Performs the forward pass through the hierarchical network, transforming the nested input into learned summary statistics.

Parameters:

x : tf.Tensor of shape (batch_size, ..., data_dim)
    For example, for hierarchical data sets with two levels: (batch_size, D, L, x_dim) -> reduces to (batch_size, out_dim).

return_all : boolean, optional, default: False
    Whether to return all intermediate outputs (True) or just the final one (False).

Returns:

out : tf.Tensor
    Output of shape (batch_size, out_dim) if return_all=False, else a tuple with len(outputs) == len(networks) corresponding to the outputs of all networks.
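Continuing the construction sketch above, a hedged usage example with illustrative shapes (16 simulations, D=8 groups, L=20 observations per group, x_dim=3):

```python
import numpy as np

x = np.random.normal(size=(16, 8, 20, 3)).astype(np.float32)

out = summary_net(x)                           # shape: (16, out_dim)
all_levels = summary_net(x, return_all=True)   # one output per network/level
```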

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:

optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, the return value has shape (batch_size, d0, .. dN-1), i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’: ‘accuracy’, ‘output_b’: [‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]


tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data. y: Target data. y_pred: Predictions returned by the model (output of model(x)) sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors. mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data. y: Target data. y_pred: Predictions returned by the model (output of model.call(x)) sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).
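As a quick illustration (the layer must be built first, as noted above):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(3)
layer.build((None, 4))             # create the weights
assert layer.count_params() == 15  # 4 * 3 kernel entries + 3 biases
```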

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” resolves to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples

(1:1 mapping between weights and samples), or in the case of

temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
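A minimal usage sketch; the model and arrays are random placeholders:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])

x_test = np.random.random((32, 4))
y_test = np.random.random((32, 1))

# return_dict=True keys the results by metric name instead of position.
results = model.evaluate(x_test, y_test, batch_size=8, return_dict=True)
# e.g. {'loss': ..., 'mae': ...}
```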

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer’s weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ resolves to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
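A short sketch tying several of these arguments together; the arrays are random placeholders:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((100, 8))
y = np.random.random((100, 1))

# Hold out the last 20% of the data for validation and validate only
# every second epoch.
history = model.fit(x, y, batch_size=16, epochs=4,
                    validation_split=0.2, validation_freq=2)
print(history.history["loss"])  # one entry per epoch
```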

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.
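A typical get_config()/from_config() round trip, sketched with an arbitrary layer:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4, activation="relu")
config = layer.get_config()

# Rebuild an identical (untrained) layer from its config.
clone = tf.keras.layers.Dense.from_config(config)
assert clone.units == 4
```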

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer. index: Integer, index of layer.

Returns:

A layer instance.
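For example, assuming a small named functional model:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="head")(x)
model = tf.keras.Model(inputs, outputs)

# Index 0 is the input layer, so 'hidden' sits at index 1.
assert model.get_layer(name="hidden") is model.get_layer(index=1)
```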

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode. AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape. RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
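A hedged sketch of by-name loading for transfer learning; the layer names and the .h5 path are illustrative, and HDF5 support requires h5py:

```python
import tensorflow as tf

source = tf.keras.Sequential(
    [tf.keras.layers.Dense(2, input_shape=(3,), name="probe")])
source.save_weights("weights.h5")  # '.h5' suffix selects HDF5 format

# A different architecture can still pick up the matching layer by name.
target = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=(3,), name="probe"),
    tf.keras.layers.Dense(1, name="extra"),
])
target.load_weights("weights.h5", by_name=True)
```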

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate the function again with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate the function again with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate the function again with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape. RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” resolves to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function. ValueError: In case of mismatch between the provided

input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
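A sketch contrasting batched predict() with a direct call for a handful of samples, per the guidance above; the model and arrays are placeholders:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

big_x = np.random.random((10000, 4))
preds = model.predict(big_x, batch_size=256)  # batched inference

small_x = np.random.random((2, 4))
out = model(small_x, training=False)  # faster for inputs that fit in one batch
out_np = out.numpy()                  # eager tensor -> numpy array
```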

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
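A hedged sketch of overriding predict_step to post-process outputs; the thresholding model is purely illustrative:

```python
import numpy as np
import tensorflow as tf

class ThresholdedModel(tf.keras.Sequential):
    def predict_step(self, data):
        # The default step is roughly `self(x, training=False)`;
        # here the predicted probabilities are additionally binarized.
        probs = super().predict_step(data)
        return tf.cast(probs > 0.5, tf.int32)

model = ThresholdedModel(
    [tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))])
preds = model.predict(np.random.random((8, 4)))  # array of 0s and 1s
```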

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:

model: Keras model instance to be saved.

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).

  • For every layer, a group named layer.name.
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.
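A minimal sketch of the two formats; the model and paths are illustrative, and HDF5 requires h5py:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

# TensorFlow checkpoint format: 'ckpt/weights' is a file prefix; an
# index file plus data shards are written under 'ckpt/'.
model.save_weights("ckpt/weights")

# HDF5 format, selected here by the '.h5' suffix.
model.save_weights("weights.h5")
```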

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the number and shapes of the layer’s weights (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout. If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models. Defaults to False.

show_trainable: Whether to show if a layer is trainable. Defaults to False.

layer_range: a list or tuple of 2 strings, which are the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names. In that case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
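
As a usage note, print_fn can capture the summary as a string; a minimal sketch (the model is hypothetical):

```python
import io
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(5, input_shape=(10,))])
buffer = io.StringIO()
model.summary(print_fn=lambda line: buffer.write(line + "\n"))
summary_text = buffer.getvalue()  # the full summary as one string
```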

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.
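
A minimal usage sketch (model and data are hypothetical):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

x = np.random.rand(8, 3).astype('float32')
y = np.random.rand(8, 1).astype('float32')
results = model.test_on_batch(x, y, return_dict=True)  # e.g. {'loss': ..., 'mae': ...}
```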

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.
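
For instance, a hedged sketch of an override that mirrors the default logic (forward pass, loss, metric updates) for a model fed (inputs, targets) batches:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def test_step(self, data):
        x, y = data  # assumes (inputs, targets) batches
        y_pred = self(x, training=False)
        # Update loss and metric state, then report current results.
        self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```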

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
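
A minimal round-trip sketch (the model is hypothetical; weights are not part of the JSON):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
json_string = model.to_json()
rebuilt = tf.keras.models.model_from_json(json_string)  # same architecture, fresh weights
```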

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
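
A minimal usage sketch (model and data are hypothetical):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse')

x = np.random.rand(8, 3).astype('float32')
y = np.random.rand(8, 1).astype('float32')
loss = model.train_on_batch(x, y)  # one gradient update; returns the batch loss
```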

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
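
Following the pattern from the guide linked above, a hedged sketch of an override covering the forward pass, loss, backpropagation, and metric updates:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def train_step(self, data):
        x, y = data  # assumes (inputs, targets) batches
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred,
                                      regularization_losses=self.losses)
        # Backpropagate and apply one optimizer step.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```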

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.InducedSelfAttentionBlock(*args, **kwargs)[source]#

Bases: Model

Implements the ISAB block from [1] which represents learnable self-attention specifically designed to deal with large sets via a learnable set of “inducing points”.

[1] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019). Set transformer: A framework for attention-based permutation-invariant neural networks. In International Conference on Machine Learning (pp. 3744-3753). PMLR.

Creates a self-attention block with inducing points (ISAB) which will typically be used as part of a set transformer architecture according to [1].

Parameters:
input_dim : int

The dimensionality of the input data (last axis).

attention_settings : dict

A dictionary which will be unpacked as the arguments for the MultiHeadAttention layer. See https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention.

num_dense_fc : int

The number of hidden layers for the internal feedforward network.

dense_settings : dict

A dictionary which will be unpacked as the arguments for the Dense layer.

use_layer_norm : boolean

Whether to apply layer normalization before and after attention + feedforward.

num_inducing_points : int

The number of inducing points. Should be lower than the smallest set size.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the __init__() method of tf.keras.Model.
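
A construction sketch based on the parameter list above; the settings dicts hold illustrative values that are unpacked into MultiHeadAttention and Dense:

```python
import tensorflow as tf
from bayesflow.summary_networks import InducedSelfAttentionBlock

block = InducedSelfAttentionBlock(
    input_dim=3,
    attention_settings=dict(num_heads=4, key_dim=32),
    num_dense_fc=2,
    dense_settings=dict(units=64, activation="relu"),
    use_layer_norm=True,
    num_inducing_points=16,
)
x = tf.random.normal((8, 50, 3))  # (batch_size, set_size, input_dim)
out = block(x)                    # (batch_size, set_size, input_dim)
```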

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:
value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:
name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when trainable has been set to True with synchronization set as ON_READ.
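
A minimal sketch of add_weight inside a custom layer’s build() (names and shapes are illustrative):

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):

    def build(self, input_shape):
        # Create trainable variables once the input dimensionality is known.
        self.kernel = self.add_weight(
            name='kernel', shape=(input_shape[-1], 4),
            initializer='glorot_uniform', trainable=True)
        self.bias = self.add_weight(
            name='bias', shape=(4,), initializer='zeros', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel) + self.bias
```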

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass through the self-attention layer.

Parameters:
x : tf.Tensor

Input of shape (batch_size, set_size, input_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, set_size, input_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’: ’accuracy’, ‘output_b’: [‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:
x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:
inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:
x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model.call(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
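
A minimal usage sketch (model and data are hypothetical):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

x = np.random.rand(64, 3).astype('float32')
y = np.random.rand(64, 1).astype('float32')
results = model.evaluate(x, y, batch_size=16, return_dict=True)  # e.g. {'loss': ..., 'mae': ...}
```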

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer’s weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on the verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:
  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer. Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:
  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled or 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data and what the model expects or when the input data is empty.
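
A minimal usage sketch (model and data are hypothetical), holding out a validation fraction:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(128, 3).astype('float32')
y = np.random.rand(128, 1).astype('float32')
history = model.fit(x, y, batch_size=32, epochs=5, validation_split=0.2)
# history.history records loss and val_loss per epoch
```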

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
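
A minimal round-trip sketch (the model is hypothetical; weights are not part of the config):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
config = model.get_config()
rebuilt = tf.keras.Sequential.from_config(config)  # same architecture, fresh weights
```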

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:
name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.
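
A minimal lookup sketch (layer names are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, name='hidden')(inputs)
outputs = tf.keras.layers.Dense(1, name='head')(x)
model = tf.keras.Model(inputs, outputs)

hidden = model.get_layer(name='hidden')
head = model.get_layer(index=2)  # the InputLayer occupies index 0
```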

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode. AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
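
For illustration, a minimal sketch of by-name loading (the file name and layer names below are hypothetical):

```python
import tensorflow as tf

# Build a model whose layer names match those stored in the weights file.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", name="encoder"),
    tf.keras.layers.Dense(1, name="head"),
])
model.build(input_shape=(None, 16))

# "pretrained.h5" is a hypothetical HDF5 file created via
# model.save_weights(). With by_name=True, weights are matched by layer
# name; skip_mismatch=True skips layers whose weights no longer agree.
model.load_weights("pretrained.h5", by_name=True, skip_mismatch=True)
```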

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().
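
To illustrate the guidance above, a small sketch (shapes and batch size are arbitrary) contrasting a direct call with predict():

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

# Small input that fits in one batch: a direct call avoids the
# per-call overhead of predict() inside tight loops.
x_small = tf.constant(np.random.random((4, 3)), dtype=tf.float32)
y_small = model(x_small, training=False).numpy()

# Large input: predict() iterates over batches for you.
x_large = np.random.random((10000, 3)).astype("float32")
y_large = model.predict(x_large, batch_size=256, verbose=0)
```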

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.
ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
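
For instance, a hypothetical subclass could override predict_step to post-process the forward pass (the clipping below is purely illustrative):

```python
import tensorflow as tf

class ClippedModel(tf.keras.Model):  # hypothetical subclass
    def predict_step(self, data):
        # Assumes batches of plain inputs (no targets), as produced by
        # Model.predict on array or tensor input.
        x = data
        y = self(x, training=False)  # forward pass in inference mode
        # Illustrative post-processing step.
        return tf.clip_by_value(y, 0.0, 1.0)
```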

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.
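
A short sketch of the two formats (the paths are hypothetical):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# TensorFlow format: "weights_ckpt" is a file *prefix*; several files
# (an index file plus data shards) are written next to it.
model.save_weights("weights_ckpt")
model.load_weights("weights_ckpt")

# HDF5 format is selected by the ".h5" suffix (requires h5py).
model.save_weights("weights.h5")
```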

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which are the starting and ending layer names (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names. In that case, the start predicate will be the first element that matches layer_range[0], and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
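
For example, the string summary can be captured with a custom print_fn:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

# Collect the summary lines instead of printing them to stdout.
lines = []
model.summary(print_fn=lines.append)
summary_text = "\n".join(lines)
```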

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to

json.dumps().

Returns:

A JSON string.
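
A round-trip sketch; note that only the architecture is serialized, not the weights:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
json_string = model.to_json()

# Rebuild the architecture from the JSON string (weights start fresh).
rebuilt = tf.keras.models.model_from_json(json_string)
```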

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays

    (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors

    (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
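
A minimal hand-rolled loop built on train_on_batch (data and shapes are arbitrary):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.random((128, 4)).astype("float32")
y = np.random.random((128, 1)).astype("float32")

# One epoch over mini-batches of 32 samples.
for i in range(0, len(x), 32):
    logs = model.train_on_batch(x[i:i + 32], y[i:i + 32], return_dict=True)
# logs is a dict such as {'loss': ...}.
```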

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
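
As a sketch of the pattern described above, following the linked guide (the subclass name is hypothetical):

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):  # hypothetical subclass
    def train_step(self, data):
        x, y = data  # assumes batches of (inputs, targets)
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # forward pass
            loss = self.compiled_loss(
                y, y_pred, regularization_losses=self.losses)
        # Backpropagation and one optimizer update.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Metric updates; the returned dict is forwarded to the callbacks.
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```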

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names include the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.InvariantModule(*args, **kwargs)[source]#

Bases: Model

Implements an invariant module performing a permutation-invariant transform.

For details and rationale, see:

[1] Bloem-Reddy, B., & Teh, Y. W. (2020). Probabilistic Symmetries and Invariant Neural Networks. J. Mach. Learn. Res., 21, 90-1. https://www.jmlr.org/papers/volume21/19-322/19-322.pdf

Creates an invariant module according to [1] which represents a learnable permutation-invariant function with an option for learnable pooling.

Parameters:
settings : dict

A dictionary holding the configuration settings for the module.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the tf.keras.Model constructor.
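
To convey the idea, a conceptual sketch in the spirit of [1] (not BayesFlow’s exact implementation; layer sizes and the pooling choice are illustrative):

```python
import tensorflow as tf

class ToyInvariantModule(tf.keras.Model):
    """Inner network per set element -> pooling -> outer network."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.inner = tf.keras.Sequential(
            [tf.keras.layers.Dense(64, activation="relu") for _ in range(2)])
        self.outer = tf.keras.Sequential(
            [tf.keras.layers.Dense(64, activation="relu") for _ in range(2)])

    def call(self, x, **kwargs):
        # x has shape (batch_size, ..., set_size, x_dim).
        h = self.inner(x)
        # Mean pooling over the set axis makes the output invariant to
        # permutations of the set elements.
        pooled = tf.reduce_mean(h, axis=-2)
        return self.outer(pooled)
```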

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors,

losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable

that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when

trainable has been set to True with synchronization set as ON_READ.
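
A typical use inside a custom layer’s build() (names and sizes are illustrative):

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):  # illustrative custom layer
    def __init__(self, units=4, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Creates a trainable kernel once the input dimension is known.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)
```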

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution.

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of

shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.
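
For a subclassed model, build() creates the weights from shape information alone (a minimal sketch):

```python
import tensorflow as tf

class TinyModel(tf.keras.Model):  # hypothetical subclass
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

model = TinyModel()
model.build((None, 8))  # substitute for calling the model on real data
model.summary()
```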

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass of a learnable invariant transform.

Parameters:
x : tf.Tensor

Input of shape (batch_size,…, x_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size,…, out_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See

tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or

a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during

training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar

coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be

wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to

run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for

tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means that no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).
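A quick sketch of counting parameters on a built model; a Dense layer with a 4-dimensional input and 8 units has a 4 x 8 kernel plus 8 biases:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])
print(model.count_params())  # 40 = 4 * 8 kernel weights + 8 biases
```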

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. "auto" becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to "auto".

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples

(1:1 mapping between weights and samples), or in the case of

temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
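A minimal usage sketch with in-memory NumPy arrays and return_dict=True (toy shapes chosen for illustration):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

x = np.random.random((16, 3))
y = np.random.random((16, 1))

# With return_dict=True, each metric name maps to its scalar value,
# e.g. {'loss': 0.31, 'mae': 0.47}.
results = model.evaluate(x, y, batch_size=4, return_dict=True, verbose=0)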

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer's state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer's weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to 'auto'.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
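A minimal usage sketch (toy data and hypothetical shapes) showing the returned History object:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')

x = np.random.random((32, 3))
y = np.random.random((32, 1))

history = model.fit(x, y, batch_size=8, epochs=3,
                    validation_split=0.25, verbose=0)
# One entry per epoch for training and validation losses.
print(history.history['loss'])
print(history.history['val_loss'])
```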

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.
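A round-trip sketch via get_config; only the configuration is carried over, so the clone starts with freshly initialized weights:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4, activation='relu')
config = layer.get_config()

# A fresh layer with the same configuration but new weights.
clone = tf.keras.layers.Dense.from_config(config)
```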

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
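A sketch of the override pattern recommended above for subclassed models; units is a hypothetical init argument, and on some TF versions the super() call may itself raise NotImplementedError, in which case build the dict directly:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, units=8, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        # Extend the parent config with this model's own init arguments.
        config = super().get_config()
        config.update({'units': self.units})
        return config
```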

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.
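A small sketch; 'head' is a hypothetical layer name, and index 0 belongs to the InputLayer in this functional model:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2, name='head')(inputs)
model = tf.keras.Model(inputs, outputs)

by_name = model.get_layer(name='head')
by_index = model.get_layer(index=1)  # index 0 is the InputLayer
assert by_name is by_index
```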

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric results is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
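A minimal round-trip sketch using the TensorFlow checkpoint format; 'my_ckpt' is a hypothetical file prefix:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.save_weights('my_ckpt')  # writes checkpoint files with this prefix

# An identically structured model picks the weights up by topology.
clone = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
clone.load_weights('my_ckpt')
```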

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual inference logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. "auto" becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to "auto".

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.
ValueError: In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
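A minimal usage sketch with toy shapes:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
preds = model.predict(np.random.random((8, 3)), batch_size=4, verbose=0)
print(preds.shape)  # (8, 2)
```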

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

model: Keras model instance to be saved.
filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
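A sketch of overriding save_own_variables and load_own_variables together; ScaledDense and the 'kernel' store key are hypothetical (by default, Keras stores variables under numeric string keys instead):

```python
import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    # Hypothetical layer that stores its kernel under a custom key.
    def build(self, input_shape):
        self.kernel = self.add_weight(
            name='kernel', shape=(input_shape[-1], 4))

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

    def save_own_variables(self, store):
        store['kernel'] = self.kernel.numpy()

    def load_own_variables(self, store):
        self.kernel.assign(store['kernel'])
```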

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names. In that case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
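A sketch of capturing the summary as a string via print_fn, which is called once per summary line:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

lines = []
model.summary(print_fn=lines.append)  # capture instead of printing
text = '\n'.join(lines)
```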

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.
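
As a hedged sketch of overriding test_step, following the forward pass / loss / metrics outline above (the subclass name is hypothetical):

```python
import tensorflow as tf

class CustomEvalModel(tf.keras.Model):  # hypothetical subclass for illustration
    def test_step(self, data):
        x, y = data
        y_pred = self(x, training=False)                # forward pass
        self.compiled_loss(y, y_pred)                   # loss calculation
        self.compiled_metrics.update_state(y, y_pred)   # metrics updates
        return {m.name: m.result() for m in self.metrics}
```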

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
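
A small round-trip sketch; the architecture is illustrative, and weights are not part of the JSON:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
json_string = model.to_json()

# Rebuilds the architecture only; weights must be restored separately.
rebuilt = tf.keras.models.model_from_json(json_string)
```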

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
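
A minimal manual-loop sketch using train_on_batch; model and data are illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer='rmsprop', loss='mse')

for _ in range(10):  # one gradient update per call
    x = np.random.rand(16, 8).astype('float32')
    y = np.random.rand(16, 1).astype('float32')
    loss = model.train_on_batch(x, y)
```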

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
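
As a hedged sketch of overriding train_step, mirroring the forward pass / loss / backpropagation / metrics outline above (the subclass name is hypothetical):

```python
import tensorflow as tf

class CustomTrainModel(tf.keras.Model):  # hypothetical subclass for illustration
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)            # forward pass
            loss = self.compiled_loss(y, y_pred)       # loss calculation
        grads = tape.gradient(loss, self.trainable_variables)  # backpropagation
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)  # metric updates
        return {m.name: m.result() for m in self.metrics}
```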

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.InvariantNetwork(*args, **kwargs)[source]#

Bases: DeepSet

Deprecated class for InvariantNetwork.

Creates a stack of ‘num_equiv’ equivariant layers followed by a final invariant layer.

Parameters:
summary_dim : int, optional, default: 10

The number of learned summary statistics.

num_dense_s1 : int, optional, default: 2

The number of dense layers in the inner function of a deep set.

num_dense_s2 : int, optional, default: 2

The number of dense layers in the outer function of a deep set.

num_dense_s3 : int, optional, default: 2

The number of dense layers in an equivariant layer.

num_equiv : int, optional, default: 2

The number of equivariant layers in the network.

dense_s1_args : dict or None, optional, default: None

The arguments for the dense layers of s1 (inner, pre-pooling function). If None, defaults will be used (see default_settings). Otherwise, all arguments for a tf.keras.layers.Dense layer are supported.

dense_s2_args : dict or None, optional, default: None

The arguments for the dense layers of s2 (outer, post-pooling function). If None, defaults will be used (see default_settings). Otherwise, all arguments for a tf.keras.layers.Dense layer are supported.

dense_s3_args : dict or None, optional, default: None

The arguments for the dense layers of s3 (equivariant function). If None, defaults will be used (see default_settings). Otherwise, all arguments for a tf.keras.layers.Dense layer are supported.

pooling_fun : str or callable, optional, default: ‘mean’

If a string argument is provided, it should be one of [‘mean’, ‘max’]. In addition, an actual neural network can be passed for learnable pooling.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the __init__() method of tf.keras.Model.
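
A construction sketch based on the parameters documented above; since the class is deprecated, DeepSet is the drop-in replacement, and the argument values here simply restate the documented defaults:

```python
from bayesflow.summary_networks import InvariantNetwork

summary_net = InvariantNetwork(
    summary_dim=10,    # number of learned summary statistics
    num_dense_s1=2,    # dense layers in the inner (pre-pooling) function
    num_dense_s2=2,    # dense layers in the outer (post-pooling) function
    num_dense_s3=2,    # dense layers per equivariant layer
    num_equiv=2,       # number of equivariant layers
    pooling_fun='mean',
)
```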

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors,

losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable

that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when trainable has been set to True with synchronization set as ON_READ.
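
A minimal custom-layer sketch using add_weight; the layer itself is illustrative:

```python
import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):  # illustrative custom layer
    def build(self, input_shape):
        # One trainable scale factor per input feature.
        self.scale = self.add_weight(
            name='scale',
            shape=(input_shape[-1],),
            initializer='ones',
            trainable=True,
        )

    def call(self, inputs):
        return inputs * self.scale
```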

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of

shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)#

Performs the forward pass of a learnable deep invariant transformation consisting of a sequence of equivariant transforms followed by an invariant transform.

Parameters:
x : tf.Tensor

Input of shape (batch_size, n_obs, data_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, out_dim)
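
A shape-check sketch of the forward pass just described, assuming a default InvariantNetwork; the batch size and dimensions are illustrative:

```python
import numpy as np
from bayesflow.summary_networks import InvariantNetwork

net = InvariantNetwork(summary_dim=10)

# 8 exchangeable data sets, each with 50 observations of dimension 3,
# i.e. (batch_size, n_obs, data_dim).
x = np.random.rand(8, 50, 3).astype('float32')
out = net(x)
# Expected output shape: (batch_size, out_dim), here (8, 10).
```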

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See

tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or

a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during

training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar

coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be

wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to

run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for

tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]


tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
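
A minimal sketch of evaluate with array inputs; model and data are illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])

x_test = np.random.rand(128, 8).astype('float32')
y_test = np.random.rand(128, 1).astype('float32')

# With return_dict=True, results come back keyed by metric name.
results = model.evaluate(x_test, y_test, batch_size=32, return_dict=True)
```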

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
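
A minimal end-to-end sketch of fit with array inputs and a validation split; model and data are illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer='rmsprop', loss='mse')

x = np.random.rand(256, 8).astype('float32')
y = np.random.rand(256, 1).astype('float32')

history = model.fit(x, y, batch_size=32, epochs=3, validation_split=0.2)
per_epoch_loss = history.history['loss']  # recorded by the History callback
```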

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.
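
A round-trip sketch of get_config/from_config on a stock layer (the layer choice is illustrative):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4, activation='relu')
config = layer.get_config()

# Reinstantiates an equivalent, untrained layer from the config dict.
clone = tf.keras.layers.Dense.from_config(config)
```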

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. NotImplementedError is raised in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
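For example, a minimal sketch of a weights round trip between two models with the same topology:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.save_weights("weights.h5")  # '.h5' suffix selects HDF5

# A fresh model with identical architecture can restore the weights.
clone = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
clone.load_weights("weights.h5")
```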

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Note also that test loss is not affected by regularization layers such as noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
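For example, a minimal sketch contrasting batched predict() with a direct call:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

x = np.random.random((64, 3)).astype("float32")
preds = model.predict(x, batch_size=32)  # batched inference, NumPy output
print(preds.shape)                       # (64, 1)

# For a single small batch, calling the model directly is faster:
single = model(x[:4], training=False)
```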

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
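As a sketch of the kind of override this enables (assuming data here is the raw input batch; the postprocessing is illustrative):

```python
import tensorflow as tf

class ArgmaxModel(tf.keras.Model):
    def predict_step(self, data):
        # Forward pass in inference mode, then return class ids
        # instead of the raw per-class scores.
        scores = self(data, training=False)
        return tf.argmax(scores, axis=-1)
```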

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
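For example, eager execution can be requested at compile time to make debugging easier. A minimal sketch:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
# Layer calls now run step by step, so breakpoints and print() work
# inside custom call()/train_step() code.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
```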

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
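As a sketch of overriding the save_own_variables()/load_own_variables() pair (the extra entry stored here is illustrative, and the exact store semantics depend on the Keras version and saving backend):

```python
import numpy as np
import tensorflow as tf

class TaggedDense(tf.keras.layers.Dense):
    """Illustrative layer that persists one extra metadata entry."""

    def __init__(self, units, tag=0, **kwargs):
        super().__init__(units, **kwargs)
        self.tag = tag  # plain Python state, not a tf.Variable

    def save_own_variables(self, store):
        super().save_own_variables(store)  # default: kernel and bias
        store["tag"] = np.array(self.tag)  # illustrative extra entry

    def load_own_variables(self, store):
        super().load_own_variables(store)
        self.tag = int(np.array(store["tag"]))
```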

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).
  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).
    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.
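For example, a minimal sketch of the two formats:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# TensorFlow checkpoint format: the path is a file prefix, and several
# files are generated next to it.
model.save_weights("ckpt/weights")
model.load_weights("ckpt/weights")

# HDF5 format, selected by the '.h5' suffix (requires h5py).
model.save_weights("weights.h5")
model.load_weights("weights.h5")
```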

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names, in which case the start layer is the first one matching layer_range[0] and the end layer is the last one matching layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
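For example, the printed summary can be captured as a string by passing a custom print_fn. A minimal sketch:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

lines = []
model.summary(print_fn=lines.append)  # collect lines instead of printing
summary_text = "\n".join(lines)
```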

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to

json.dumps().

Returns:

A JSON string.
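For example, a minimal sketch of the JSON round trip (only the architecture is restored, not the weights):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

json_string = model.to_json()
reloaded = tf.keras.models.model_from_json(json_string)
```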

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
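For example, a minimal sketch:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.random((8, 3)).astype("float32")
y = np.random.random((8, 1)).astype("float32")
logs = model.train_on_batch(x, y, return_dict=True)
print(logs)  # e.g. {'loss': ..., 'mae': ...}
```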

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
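A sketch of a custom train_step following the pattern described above (this assumes the standard loss/metrics set up via compile(); see the guide linked above for the canonical version):

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data  # assumes fit() was called with (x, y) pairs
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)       # forward pass
            loss = self.compiled_loss(y, y_pred)  # loss from compile()
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```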

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.LSTM(*args, **kwargs)[source]#

Bases: DropoutRNNCellMixin, RNN, BaseRandomLayer

Long Short-Term Memory layer - Hochreiter 1997.

See [the Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn) for details about the usage of RNN API.

Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If a GPU is available and all the arguments to the layer meet the requirement of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation.

The requirements to use the cuDNN implementation are:

  1. activation == tanh

  2. recurrent_activation == sigmoid

  3. recurrent_dropout == 0

  4. unroll is False

  5. use_bias is True

  6. Inputs, if masking is used, are strictly right-padded.

  7. Eager execution is enabled in the outermost context.

For example:

>>> inputs = tf.random.normal([32, 10, 8])
>>> lstm = tf.keras.layers.LSTM(4)
>>> output = lstm(inputs)
>>> print(output.shape)
(32, 4)
>>> lstm = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True)
>>> whole_seq_output, final_memory_state, final_carry_state = lstm(inputs)
>>> print(whole_seq_output.shape)
(32, 10, 4)
>>> print(final_memory_state.shape)
(32, 4)
>>> print(final_carry_state.shape)
(32, 4)
Args:

units: Positive integer, dimensionality of the output space.

activation: Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

recurrent_activation: Activation function to use for the recurrent step.

Default: sigmoid (sigmoid). If you pass None, no activation is applied (ie. “linear” activation: a(x) = x).

use_bias: Boolean (default True), whether the layer uses a bias vector.

kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.

recurrent_initializer: Initializer for the recurrent_kernel weights

matrix, used for the linear transformation of the recurrent state. Default: orthogonal.

bias_initializer: Initializer for the bias vector. Default: zeros.

unit_forget_bias: Boolean (default True). If True, add 1 to the bias of the forget gate at initialization. Setting it to True will also force bias_initializer=”zeros”. This is recommended in Jozefowicz et al. (2015).
kernel_regularizer: Regularizer function applied to the kernel weights

matrix. Default: None.

recurrent_regularizer: Regularizer function applied to the

recurrent_kernel weights matrix. Default: None.

bias_regularizer: Regularizer function applied to the bias vector.

Default: None.

activity_regularizer: Regularizer function applied to the output of the

layer (its “activation”). Default: None.

kernel_constraint: Constraint function applied to the kernel weights

matrix. Default: None.

recurrent_constraint: Constraint function applied to the

recurrent_kernel weights matrix. Default: None.

bias_constraint: Constraint function applied to the bias vector. Default:

None.

dropout: Float between 0 and 1. Fraction of the units to drop for the

linear transformation of the inputs. Default: 0.

recurrent_dropout: Float between 0 and 1. Fraction of the units to drop

for the linear transformation of the recurrent state. Default: 0.

return_sequences: Boolean. Whether to return the last output in the output

sequence, or the full sequence. Default: False.

return_state: Boolean. Whether to return the last state in addition to the

output. Default: False.

go_backwards: Boolean (default False). If True, process the input

sequence backwards and return the reversed sequence.

stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.

time_major: The shape format of the inputs and outputs tensors.

If True, the inputs and outputs will be in shape [timesteps, batch, feature], whereas in the False case, it will be [batch, timesteps, feature]. Using time_major = True is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.

unroll: Boolean (default False). If True, the network will be unrolled,

else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.

Call arguments:

inputs: A 3D tensor with shape [batch, timesteps, feature]. mask: Binary tensor of shape [batch, timesteps] indicating whether

a given timestep should be masked (optional). An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. Defaults to None.

training: Python boolean indicating whether the layer should behave in

training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional). Defaults to None.

initial_state: List of initial state tensors to be passed to the first

call of the cell (optional, None causes creation of zero-filled initial state tensors). Defaults to None.

Initialize the BaseRandomLayer.

Note that the constructor is annotated with @no_automatic_dependency_tracking. This is to skip the auto tracking of the self._random_generator instance, which is an AutoTrackable. The backend.RandomGenerator could contain a tf.random.Generator instance, which will have a tf.Variable as its internal state. We want to avoid saving that state into model.weights and checkpoints for backward compatibility reasons. In the meantime, we still need to make them visible to SavedModel when it is tracing the tf.function for call(). See _list_extra_dependencies_for_serialization below for more details.

Args:

seed: optional integer, used to create RandomGenerator.

force_generator: boolean, default to False, whether to force the RandomGenerator to use the code branch of tf.random.Generator.

rng_type: string, the rng type that will be passed to backend

RandomGenerator. None will allow RandomGenerator to choose types by itself. Valid values are “stateful”, “stateless”, “legacy_stateful”. Defaults to None.

**kwargs: other keyword arguments that will be passed to the parent class.

__call__(inputs, initial_state=None, constants=None, **kwargs)#

Call self as a function.

property activation#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors,

losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable

that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when

trainable has been set to True with synchronization set as ON_READ.
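For example, a custom layer typically calls add_weight() from build(). A minimal sketch (the layer is illustrative):

```python
import tensorflow as tf

class Scale(tf.keras.layers.Layer):
    """Illustrative layer with one learned per-feature scale."""

    def build(self, input_shape):
        self.alpha = self.add_weight(
            name="alpha",
            shape=(input_shape[-1],),
            initializer="ones",
            trainable=True)

    def call(self, inputs):
        return inputs * self.alpha

layer = Scale()
print(layer(tf.ones((2, 3))))  # builds the layer, then scales the input
```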

property bias_constraint#
property bias_initializer#
property bias_regularizer#
build(input_shape)#

Creates the variables of the layer (for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Args:
input_shape: Instance of TensorShape, or list of instances of

TensorShape if the layer expects a list of inputs (one instance per input).

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(inputs, mask=None, training=None, initial_state=None)[source]#

This is where the layer’s logic lives.

The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state, including tf.Variable instances and nested Layer instances, in __init__(), or in the build() method that is called automatically before call() executes for the first time.

Args:
inputs: Input tensor, or dict/list/tuple of input tensors.

The first positional inputs argument is subject to special rules:
  • inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.

  • NumPy array or Python scalar values in inputs get cast as tensors.

  • Keras mask metadata is only collected from inputs.

  • Layers are built (build(input_shape) method) using shape info from inputs only.

  • input_spec compatibility is only checked against inputs.

  • Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.

  • The SavedModel input specification is generated using inputs only.

  • Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc is only supported for inputs and not for tensors in positional and keyword arguments.

*args: Additional positional arguments. May contain tensors, although

this is not recommended, for the reasons above.

**kwargs: Additional keyword arguments. May contain tensors, although

this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
  • training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.

  • mask: Boolean input mask. If the layer’s call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).

Returns:

A tensor or list/tuple of tensors.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_mask(inputs, mask)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
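For example, a minimal sketch using the LSTM layer documented above:

```python
import tensorflow as tf

layer = tf.keras.layers.LSTM(4, return_sequences=True)
# A (batch, timesteps, features) input of (None, 10, 8) maps to (None, 10, 4).
print(layer.compute_output_shape((None, 10, 8)))
```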

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to using compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).
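For example, a minimal sketch:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build((None, 3))       # weights must exist before counting
print(layer.count_params())  # kernel 3 * 4 + bias 4 = 16
```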

property dropout#
property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer’s weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

classmethod from_config(config)[source]#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_config()[source]#

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns:

Python dictionary.
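For example, a minimal sketch of the config round trip:

```python
import tensorflow as tf

layer = tf.keras.layers.LSTM(4, dropout=0.1)
config = layer.get_config()
# Reinstantiates an identical (untrained) layer from its config.
clone = tf.keras.layers.LSTM.from_config(config)
```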

get_dropout_mask_for_cell(inputs, training, count=1)#

Get the dropout mask for RNN cell’s input.

It will create mask based on context if there isn’t any existing cached mask. If a new mask is generated, it will update the cache in the cell.

Args:
inputs: The input tensor whose shape will be used to generate the dropout mask.

training: Boolean tensor, whether in training mode; dropout will be ignored in non-training mode.

count: Int, how many dropout masks will be generated. This is useful for cells that have internal weights fused together.

Returns:

List of mask tensors, generated or cached based on context.

get_initial_state(inputs)#
get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_recurrent_dropout_mask_for_cell(inputs, training, count=1)#

Get the recurrent dropout mask for RNN cell.

It will create a mask based on context if there is no existing cached mask. If a new mask is generated, it will update the cache in the cell.

Args:

inputs: The input tensor whose shape will be used to generate the dropout mask.

training: Boolean tensor, whether it is in training mode; dropout will be ignored in non-training mode.

count: Int, how many dropout masks will be generated. Useful for cells that have internal weights fused together.

Returns:

List of mask tensors, generated or cached masks based on context.

get_weights()#

Returns the current weights of the layer, as NumPy arrays.

The weights of a layer represent the state of the layer. This function returns both trainable and non-trainable weight values associated with this layer as a list of NumPy arrays, which can in turn be used to load state into similarly parameterized layers.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Returns:

Weights values as a list of NumPy arrays.

property implementation#
property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.

AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.

RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:

- Structure (e.g. a single input, a list of 2 inputs, etc)

- Shape

- Rank (ndim)

- Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property kernel_constraint#
property kernel_initializer#
property kernel_regularizer#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

property metrics#

List of metrics attached to the layer.

Returns:

A list of Metric objects.

property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

property recurrent_activation#
property recurrent_constraint#
property recurrent_dropout#
property recurrent_initializer#
property recurrent_regularizer#
reset_dropout_mask()#

Reset the cached dropout masks if any.

It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch.

reset_recurrent_dropout_mask()#

Reset the cached recurrent dropout masks if any.

It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it will introduce unreasonable bias against certain indices of data within the batch.

reset_states(states=None)#

Reset the recorded states for the stateful RNN layer.

Can only be used when the RNN layer is constructed with stateful = True.

Args:

states: Numpy arrays that contain the value for the initial state, which will be fed to the cell at the first time step. When the value is None, a zero-filled numpy array will be created based on the cell state size.

Raises:

AttributeError: When the RNN layer is not stateful.

ValueError: When the batch size of the RNN layer is unknown.

ValueError: When the input numpy array is not compatible with the RNN layer state, either size-wise or dtype-wise.
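A minimal usage sketch; the stateful LSTM inside a Sequential model is assumed purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Stateful RNNs need a fixed batch size so state can be carried across calls.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, stateful=True, batch_input_shape=(4, 10, 3))
])

model.predict(np.ones((4, 10, 3)))   # state is kept after this call
model.layers[0].reset_states()       # zero the carried-over state
# Or seed it explicitly; an LSTM carries two states of shape (batch_size, units):
model.layers[0].reset_states([np.zeros((4, 8)), np.zeros((4, 8))])
```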

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
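A minimal sketch of overriding the save_own_variables()/load_own_variables() pair together. The MaskedDense layer and its extra mask array are hypothetical, and the integer-string keys used by the default implementation are an assumption to verify against your Keras version:

```python
import numpy as np
import tensorflow as tf

class MaskedDense(tf.keras.layers.Dense):
    """Hypothetical layer with one extra array that Keras does not track."""

    def build(self, input_shape):
        super().build(input_shape)
        # Plain NumPy attribute: not a tf.Variable, so not saved by default.
        self.mask = np.ones((self.units,), dtype="float32")

    def save_own_variables(self, store):
        super().save_own_variables(store)  # assumed: weights land under "0", "1", ...
        store["mask"] = self.mask          # persist the extra state ourselves

    def load_own_variables(self, store):
        super().load_own_variables(store)
        self.mask = np.array(store["mask"])
```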

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:

ValueError: If the provided weights list does not match the layer's specifications.

property stateful#
property states#
property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property unit_forget_bias#
property units#
property updates#
property use_bias#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names include the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.MultiConv1D(*args, **kwargs)[source]#

Bases: Model

Implements an inception-inspired 1D convolutional layer using different kernel sizes.

Creates an inception-like Conv1D layer

Parameters:

settings : dict

A dictionary which holds the arguments for the internal Conv1D layers.

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These losses become part of the model's topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:

losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These metrics become part of the model's topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:

updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer's "trainable_variables" (e.g. variables, biases) or "non_trainable_variables" (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:

ValueError: When giving unsupported dtype and no initializer or when trainable has been set to True with synchronization set as ON_READ.
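A minimal sketch of the usual pattern: a hypothetical custom layer calls add_weight() inside build(), once the size of the last input axis is known:

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    def __init__(self, units=4, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Kernel and bias are created lazily, on the first call.
        self.w = self.add_weight(name="w", shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(name="b", shape=(self.units,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

y = Linear(3)(tf.ones((2, 5)))   # builds on first call; y.shape == (2, 3)
```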

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:

input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.
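A minimal sketch (the TinyModel class is hypothetical) of using build() as a substitute for calling the model on real data:

```python
import tensorflow as tf

class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

model = TinyModel()
model.build((None, 10))   # creates the weights without running any data through
model.summary()           # now works, since the variables exist
```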

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs a forward pass through the layer.

Parameters:

x : tf.Tensor

Input of shape (batch_size, n_time_steps, n_time_series)

Returns:

out : tf.Tensor

Output of shape (batch_size, n_time_steps, n_filters)
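A minimal usage sketch; the keys in the settings dict below are an assumption chosen for illustration, so consult bayesflow's default settings for the schema MultiConv1D actually expects:

```python
import tensorflow as tf
from bayesflow.summary_networks import MultiConv1D

# Hypothetical settings: arguments for the internal Conv1D layers plus the
# range of kernel sizes; the exact keys are an assumption, not the verified API.
settings = {
    "layer_args": {"activation": "relu", "filters": 32, "strides": 1, "padding": "causal"},
    "min_kernel_size": 1,
    "max_kernel_size": 3,
}
conv = MultiConv1D(settings)

x = tf.random.normal((8, 100, 2))   # (batch_size, n_time_steps, n_time_series)
out = conv(x)                       # (batch_size, n_time_steps, n_filters)
```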

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:

optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model's predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, the return value has shape (batch_size, d0, .. dN-1), i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or 'auto'. The number of batches to run during each tf.function call. If set to "auto", keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or 'auto'. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of 'auto' turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.
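A minimal sketch of how the compute dtype diverges from the variable dtype once a mixed-precision policy is active:

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")
layer = tf.keras.layers.Dense(4)
print(layer.compute_dtype)   # float16: computations and outputs
print(layer.dtype)           # float32: the weights stay in full precision

tf.keras.mixed_precision.set_global_policy("float32")   # restore the default
```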

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model.call(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:

input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
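For example (a minimal sketch), a Dense layer maps the last axis to its unit count:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8)
print(layer.compute_output_shape((None, 4)))   # (None, 8)
```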

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:

input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:

Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:

ValueError: if the layer isn't yet built (in which case its weights aren't yet defined).
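A minimal sketch; the layer must be built before its scalars can be counted:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build((None, 3))        # weights must exist first
print(layer.count_params())   # 3 * 4 kernel entries + 4 biases = 16
```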

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: "auto", 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. "auto" becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to 'auto'.

sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, 'evaluate' will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can't be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
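A minimal usage sketch with toy data (shapes and metric choices are arbitrary):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x, y = np.random.rand(32, 4), np.random.rand(32, 1)
loss, mae = model.evaluate(x, y, batch_size=8, verbose=0)      # list of scalars
results = model.evaluate(x, y, verbose=0, return_dict=True)    # {'loss': ..., 'mae': ...}
```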

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:

filepath: str or pathlib.Path object. Path where to save the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer's state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer's weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as "final epoch". The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: 'auto', 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to 'auto'.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on the verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator or an object of tf.data.Dataset. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer. Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy, steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If 'validation_steps' is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If 'validation_steps' is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can't be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as 'x'. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. if model.fit is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and what the model expects, or when the input data is empty.
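A minimal usage sketch with toy data (shapes chosen arbitrarily), showing the History record described above:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x, y = np.random.rand(64, 4), np.random.rand(64, 1)
history = model.fit(x, y, epochs=3, batch_size=16,
                    validation_split=0.25, verbose=0)
print(history.history["loss"])       # one entry per epoch
print(history.history["val_loss"])   # present because a validation split was given
```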

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:

config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:

node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.
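A minimal sketch (layer names chosen arbitrarily) of retrieval by name and by index:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="head")(x)
model = tf.keras.Model(inputs, outputs)

hidden = model.get_layer(name="hidden")
same = model.get_layer(index=1)   # index 0 is the InputLayer
assert hidden is same
```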

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric results is a dict (containing multiple metrics), each of them gets added to the top-level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.

AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.

RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.
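As a short usage sketch, jit_compile is typically set through compile(); whether XLA actually helps (or is even supported) depends on the model’s ops:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
# Request XLA compilation of the train step; may not work for all models.
model.compile(optimizer="adam", loss="mse", jit_compile=True)
print(model.jit_compile)  # True
```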

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies options for loading weights (only valid for a SavedModel file).
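A short round-trip sketch (the "weights.h5" filename is illustrative): saving weights in HDF5 format and loading them into a model with the same topology:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.save_weights("weights.h5")  # '.h5' suffix selects HDF5 format

# Same architecture, fresh weights; topological loading restores them.
clone = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
clone.load_weights("weights.h5")

x = np.ones((1, 3), dtype=np.float32)
assert np.allclose(model.predict(x), clone.predict(x))
```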

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also note that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to “auto”.

steps: Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
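A brief usage sketch contrasting batched predict() with a direct __call__ for small inputs, as described above:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
x = np.random.random((100, 8)).astype(np.float32)

# Batched inference over many samples: returns a NumPy array.
preds = model.predict(x, batch_size=32, verbose=0)

# For a handful of samples, a direct call is faster and returns a tf.Tensor.
small = model(x[:4], training=False)
```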

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
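A hedged sketch of overriding predict_step, e.g. to return thresholded class decisions instead of probabilities (ThresholdingModel is an illustrative name):

```python
import tensorflow as tf

class ThresholdingModel(tf.keras.Model):
    def predict_step(self, data):
        # For plain-array inputs to predict(), `data` is the input batch.
        probs = self(data, training=False)
        # Post-process the forward pass into hard 0/1 decisions.
        return tf.cast(probs > 0.5, tf.int32)
```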

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, or “h5”, indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:

include_optimizer: Only applied to SavedModel and legacy HDF5 formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format. tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
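A minimal sketch of the paired hooks save_own_variables()/load_own_variables(), storing a layer’s state under a custom key instead of the default numeric indices (MyLayer is an illustrative name; assumes these hooks behave as described above):

```python
import tensorflow as tf

class MyLayer(tf.keras.layers.Layer):
    def build(self, input_shape):
        self.kernel = self.add_weight(name="kernel", shape=(input_shape[-1], 4))

    def save_own_variables(self, store):
        # Called during model.save(); choose our own key for the state.
        store["kernel"] = self.kernel.numpy()

    def load_own_variables(self, store):
        # Called during keras.models.load_model(); restore from the same key.
        self.kernel.assign(store["kernel"])
```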

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(…)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    …
    return outputs

# arg_specs is [tf.TensorSpec(…), …]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).
  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).
    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or ‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights()).

Raises:
ValueError: If the provided weights list does not match the layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout. If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models. Defaults to False.

show_trainable: Whether to show if a layer is trainable. Defaults to False.

layer_range: a list or tuple of 2 strings, which are the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names, in which case the start predicate will be the first element matching layer_range[0] and the end predicate will be the last element matching layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
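A common idiom implied by the print_fn argument: capturing the summary as a string instead of printing it:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(5, input_shape=(3,))])

lines = []
model.summary(print_fn=lines.append)  # called once per summary line
summary_text = "\n".join(lines)
```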

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
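A short round-trip sketch: the JSON string carries the architecture only, so the rebuilt model starts with fresh, untrained weights:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(5, input_shape=(3,))])
json_string = model.to_json()

# Rebuild the same architecture; weights are newly initialized.
rebuilt = tf.keras.models.model_from_json(json_string)
```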

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
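A hedged sketch of a custom train_step for plain (x, y) batches, mirroring the default logic (forward pass, loss, gradient update, metric updates); see the linked guide for the authoritative version:

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data  # assumes fit() is called with (x, y) batches
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            # Compiled loss, including any regularization losses.
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        # Returned dict is forwarded to on_train_batch_end callbacks.
        return {m.name: m.result() for m in self.metrics}
```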

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.MultiHeadAttentionBlock(*args, **kwargs)[source]#

Bases: Model

Implements the MAB block from [1] which represents learnable cross-attention.

[1] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019). Set transformer: A framework for attention-based permutation-invariant neural networks. In International Conference on Machine Learning (pp. 3744-3753). PMLR.

Creates a multihead attention block which will typically be used as part of a set transformer architecture according to [1]. Corresponds to standard cross-attention.

Parameters:
input_dim : int

The dimensionality of the input data (last axis).

attention_settings : dict

A dictionary which will be unpacked as the arguments for the MultiHeadAttention layer. See https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention.

num_dense_fc : int

The number of hidden layers for the internal feedforward network.

dense_settings : dict

A dictionary which will be unpacked as the arguments for the Dense layer.

use_layer_norm : boolean

Whether to use layer normalization before and after attention + feedforward.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the __init__() method of tf.keras.Model.

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.
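A minimal sketch of add_weight() inside a custom layer’s build(): one trainable kernel plus a non-trainable call counter (ScaledLinear is an illustrative name):

```python
import tensorflow as tf

class ScaledLinear(tf.keras.layers.Layer):
    def build(self, input_shape):
        self.kernel = self.add_weight(
            name="kernel", shape=(input_shape[-1], 1),
            initializer="glorot_uniform", trainable=True)
        # Non-trainable state, updated manually in call().
        self.calls = self.add_weight(
            name="calls", shape=(), initializer="zeros", trainable=False)

    def call(self, inputs):
        self.calls.assign_add(1.0)
        return tf.matmul(inputs, self.kernel)
```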

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, y, **kwargs)[source]#

Performs the forward pass through the attention layer.

Parameters:
x : tf.Tensor

Input of shape (batch_size, set_size_x, input_dim)

y : tf.Tensor

Input of shape (batch_size, set_size_y, input_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, set_size_x, input_dim)
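A hedged usage sketch of this block; the settings dicts below are illustrative values, unpacked into tf.keras.layers.MultiHeadAttention and Dense respectively, per the parameter descriptions above:

```python
import tensorflow as tf
from bayesflow.summary_networks import MultiHeadAttentionBlock

block = MultiHeadAttentionBlock(
    input_dim=32,
    attention_settings=dict(num_heads=4, key_dim=32),   # illustrative
    num_dense_fc=2,
    dense_settings=dict(units=64, activation="relu"),   # illustrative
    use_layer_norm=True,
)

x = tf.random.normal((16, 10, 32))  # (batch_size, set_size_x, input_dim)
y = tf.random.normal((16, 20, 32))  # (batch_size, set_size_y, input_dim)
out = block(x, y)                   # (batch_size, set_size_x, input_dim)
```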

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, the return value has shape (batch_size, d0, .. dN-1), i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’: ‘accuracy’, ‘output_b’: [‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, or tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
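
A minimal sketch, assuming a toy Dense layer:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8)
# None marks a free (batch) dimension; the layer is built as a side effect.
print(layer.compute_output_shape((None, 16)))  # (None, 8)
```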

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:

Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:

ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).
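
A minimal sketch, assuming a toy Dense layer that is built before counting:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8)
layer.build((None, 16))        # weights must exist before counting
print(layer.count_params())    # 16 * 8 kernel + 8 bias = 136
```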

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead, pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
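
A minimal usage sketch (the toy model and random data are illustrative assumptions):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x, y = np.random.random((32, 4)), np.random.random((32, 1))
# With return_dict=True the results come back keyed by metric name.
results = model.evaluate(x, y, batch_size=8, return_dict=True, verbose=0)
print(results)  # e.g. {'loss': ..., 'mae': ...}
```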

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating the layer’s weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled or, 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
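
A minimal usage sketch (the toy model and random data are illustrative assumptions):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x, y = np.random.random((64, 4)), np.random.random((64, 1))
# Hold out the last 25% of the samples for validation.
history = model.fit(x, y, epochs=3, batch_size=8,
                    validation_split=0.25, verbose=0)
print(history.history.keys())  # dict_keys(['loss', 'val_loss'])
```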

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.
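
A minimal round-trip sketch, using a toy Dense layer for illustration:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8, activation="relu")
config = layer.get_config()

# Rebuild an identically configured (but unbuilt, untrained) layer.
clone = tf.keras.layers.Dense.from_config(config)
```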

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return the config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.
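
A minimal sketch (the layer names "hidden" and "head" are illustrative assumptions):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="head")(x)
model = tf.keras.Model(inputs, outputs)

# Lookup by unique name, or by traversal index (index 0 is the InputLayer).
hidden = model.get_layer(name="hidden")
same = model.get_layer(index=1)
assert hidden is same
```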

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode. AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape. RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#

load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
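
A minimal sketch of a save/load round trip in the TensorFlow checkpoint format (the build_model helper and the "ckpt" prefix are illustrative assumptions):

```python
import tensorflow as tf

def build_model():
    return tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

model = build_model()
model.save_weights("ckpt")   # no '.h5' suffix, so TensorFlow format

# Topological loading into an architecturally identical model.
restored = build_model()
restored.load_weights("ckpt")
```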

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate the function again with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate the function again with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate the function again with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape. RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.
ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
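
A minimal usage sketch contrasting predict() with a direct call (the toy model and random data are illustrative assumptions):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

x = np.random.random((1000, 4))
preds = model.predict(x, batch_size=128, verbose=0)   # batched inference
print(preds.shape)  # (1000, 1)

# For a single small batch inside a loop, prefer the direct call:
small = model(x[:8], training=False)
```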

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
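
A sketch of overriding predict_step to post-process outputs (ScaledModel and the sigmoid step are illustrative assumptions, not the library's own pattern):

```python
import tensorflow as tf

class ScaledModel(tf.keras.Sequential):
    # Hypothetical override: squash every inference output into (0, 1).
    def predict_step(self, data):
        return tf.nn.sigmoid(super().predict_step(data))

model = ScaledModel([tf.keras.layers.Dense(1, input_shape=(4,))])
probs = model.predict(tf.random.uniform((16, 4)), verbose=0)
```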

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
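
A minimal sketch of enabling eager execution via compile() (the toy model is an illustrative assumption):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
# Eager execution makes each layer call steppable in a debugger,
# at the cost of losing tf.function graph performance.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
assert model.run_eagerly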

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

model: Keras model instance to be saved.
filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, or “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).
  • For every layer, a group named layer.name.
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).
    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the number and shapes of the layer’s weights (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable `steps_per_execution` variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout. If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models. Defaults to False.

show_trainable: Whether to show if a layer is trainable. Defaults to False.

layer_range: a list or tuple of 2 strings, which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names; in that case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
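For example, a small sketch of using print_fn to capture the summary as a string (the model is illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,)),
                             tf.keras.layers.Dense(1)])
lines = []
model.summary(print_fn=lines.append)  # print_fn is called once per line
summary_text = "\n".join(lines)
```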

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
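A hedged sketch of a config-only round trip (the model is illustrative; weights are not serialized to JSON):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
json_string = model.to_json()
# Reinstantiates the architecture with fresh, untrained weights:
clone = tf.keras.models.model_from_json(json_string)
```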

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
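A minimal usage sketch (data and model below are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(32, 3).astype("float32")
y = np.random.rand(32, 1).astype("float32")

loss = model.train_on_batch(x, y)                       # scalar loss
results = model.train_on_batch(x, y, return_dict=True)  # {'loss': ...}
```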

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
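A hedged sketch of a train_step override following the pattern from the guide linked above (assumes the model is compiled with an optimizer, loss, and metrics):

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data                               # unpack one batch
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)       # forward pass
            loss = self.compiled_loss(y, y_pred)  # compiled loss
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```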

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=...>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.PoolingWithAttention(*args, **kwargs)[source]#

Bases: Model

Implements the pooling with multihead attention (PMA) block from [1] which represents a permutation-invariant encoder for set-based inputs.

[1] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019).

Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning (pp. 3744-3753). PMLR.

Creates a multihead attention block (MAB) which will perform cross-attention between an input set and a set of seed vectors (typically one for a single summary) with summary_dim output dimensions.

It can also be used as part of a DeepSet to represent learnable (instead of fixed) pooling.

Parameters:
summary_dim : int

The dimensionality of the learned permutation-invariant representation.

attention_settings : dict

A dictionary which will be unpacked as the arguments for the MultiHeadAttention layer. See https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention.

num_dense_fc : int

The number of hidden layers for the internal feedforward network.

dense_settings : dict

A dictionary which will be unpacked as the arguments for the Dense layer.

use_layer_norm : boolean

Whether to apply layer normalization before and after attention + feedforward.

num_seeds : int, optional, default: 1

The number of “seed vectors” to use. Each seed vector represents a permutation-invariant summary of the entire set. If num_seeds > 1, the resulting seeds will be flattened into a 2-dimensional output, which will have a dimensionality of num_seeds * summary_dim.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the __init__() method of tf.keras.Model.
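A hedged construction sketch; the settings dictionaries below are illustrative choices, not prescribed defaults:

```python
import tensorflow as tf
from bayesflow.summary_networks import PoolingWithAttention

pma = PoolingWithAttention(
    summary_dim=16,
    attention_settings=dict(num_heads=4, key_dim=32),
    num_dense_fc=2,
    dense_settings=dict(units=64, activation="relu"),
    use_layer_norm=True,
    num_seeds=1,
)

x = tf.random.normal((8, 100, 5))   # (batch_size, set_size, input_dim)
out = pma(x)                        # shape: (8, num_seeds * summary_dim)
```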

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These losses become part of the model’s topology and are tracked in `get_config`.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These metrics become part of the model’s topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:
value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:
name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.
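A small sketch of add_weight inside a custom layer’s build() (the layer and names are illustrative):

```python
import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
    def build(self, input_shape):
        # One trainable scale per input feature.
        self.scale = self.add_weight(
            name="scale",
            shape=(input_shape[-1],),
            initializer="ones",
            trainable=True,
        )

    def call(self, inputs):
        return inputs * self.scale
```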

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass through the PMA block.

Parameters:
x : tf.Tensor

Input of shape (batch_size, set_size, input_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, num_seeds * summary_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, the return value has shape (batch_size, d0, .. dN-1), i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’: ‘accuracy’, ‘output_b’: [‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, or tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:
x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:
inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:
x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model.call(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
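For instance (the layer is illustrative):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8)
print(layer.compute_output_shape((None, 4)))  # TensorShape([None, 8])
```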

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead, pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
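A minimal usage sketch (data and model are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.rand(64, 3).astype("float32")
y = np.random.rand(64, 1).astype("float32")

results = model.evaluate(x, y, batch_size=16, verbose=0, return_dict=True)
# e.g. {'loss': ..., 'mae': ...}
```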

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
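A minimal usage sketch with a validation split (data and model are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(5,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(200, 5).astype("float32")
y = np.random.rand(200, 1).astype("float32")

history = model.fit(x, y, batch_size=32, epochs=3,
                    validation_split=0.2, verbose=0)
print(history.history["val_loss"])  # one entry per epoch
```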

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
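A small sketch of a get_config / from_config round trip (the Dense layer is illustrative):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4, activation="relu")
config = layer.get_config()
# Same configuration, freshly initialized weights:
clone = tf.keras.layers.Dense.from_config(config)
```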

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:
name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include: - Structure (e.g. a single input, a list of 2 inputs, etc) - Shape - Rank (ndim) - Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.
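As a concrete illustration, a minimal (hypothetical) custom layer that enforces rank-4 inputs via input_spec:

```python
import tensorflow as tf

class Rank4Only(tf.keras.layers.Layer):
    """Pass-through layer that only accepts rank-4 inputs."""

    def __init__(self):
        super().__init__()
        self.input_spec = tf.keras.layers.InputSpec(ndim=4)

    def call(self, inputs):
        return inputs

layer = Rank4Only()
layer(tf.zeros((2, 8, 8, 3)))   # OK: rank-4 input
# layer(tf.zeros((2,)))         # would raise the ValueError shown above
```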

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.
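A brief sketch of requesting XLA at compile time (whether it helps, or works at all, depends on the model):

```python
# jit_compile can be requested when compiling; not all models support it.
model.compile(optimizer="adam", loss="mse", jit_compile=True)
```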

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies options for loading weights (only valid for a SavedModel file).
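A hedged round-trip sketch ("weights.h5" is a hypothetical path; model is an already-built Keras model):

```python
# Save weights in HDF5 format, then restore them topologically.
model.save_weights("weights.h5")
model.load_weights("weights.h5")

# After modifying the architecture, load by layer name and skip layers
# whose weight counts or shapes no longer match (a warning is logged
# for each skipped layer).
model.load_weights("weights.h5", by_name=True, skip_mismatch=True)
```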

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.
RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: "auto", 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. "auto" becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to "auto".

steps: Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can't be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.
ValueError: In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
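A short usage sketch contrasting predict() with a direct call (x_large and x_small are placeholder arrays):

```python
# Batch inference over a large array: use predict().
preds = model.predict(x_large, batch_size=64, verbose=0)

# One small batch inside a loop: call the model directly instead.
preds_small = model(x_small, training=False)
```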

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
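A hedged override sketch that returns each input batch alongside its predictions; this is an illustrative customization, not the default behavior, and it assumes each batch arrives as a plain tensor:

```python
import tensorflow as tf

class EchoingModel(tf.keras.Model):
    def predict_step(self, data):
        # data is the input batch as delivered by the data pipeline.
        return {"inputs": data, "outputs": self(data, training=False)}
```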

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
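A short sketch of the two ways to enable eager execution for debugging:

```python
# Either request it at compile time...
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
# ...or flip the settable attribute afterwards.
model.run_eagerly = True
```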

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:

model: Keras model instance to be saved.

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the target location, or instead ask the user via an interactive prompt.

save_format: Either "keras", "tf", or "h5", indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as "SavedModel" below), or in the legacy HDF5 format (.h5). Defaults to "tf" in TF 2.X, and "h5" in TF 1.X.

SavedModel format arguments:

include_optimizer: Only applied to SavedModel and legacy HDF5 formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format. tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs),
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([...], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).
  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).
    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

save_format: Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise, None becomes 'tf'. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in HDF5 format.
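A brief sketch of how the filepath selects the format (both paths are hypothetical):

```python
model.save_weights("ckpt/weights")  # TF format: 'ckpt/weights' is a prefix
model.save_weights("weights.h5")    # HDF5 format: single file, needs h5py
```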

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the number and shapes of the weights of the layer (i.e. it should match the output of get_weights).

Raises:

ValueError: If the provided weights list does not match the layer's specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout. If stdout doesn't work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models. Defaults to False.

show_trainable: Whether to show if a layer is trainable. Defaults to False.

layer_range: a list or tuple of 2 strings, which are the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names, in which case the start predicate is the first element that matches layer_range[0] and the end predicate is the last element that matches layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
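A common pattern, capturing the summary as a string via print_fn:

```python
lines = []
model.summary(print_fn=lines.append)  # called once per summary line
summary_text = "\n".join(lines)
```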

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model's metrics are returned.
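A hedged override sketch following the pattern described above; it assumes (x, y) batches and a compiled loss and metrics:

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def test_step(self, data):
        x, y = data                       # assumes (inputs, targets) batches
        y_pred = self(x, training=False)  # forward pass in inference mode
        self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```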

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
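A minimal round-trip sketch:

```python
json_string = model.to_json()
reloaded = tf.keras.models.model_from_json(json_string)
# reloaded has the same architecture but freshly initialized weights.
```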

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
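A short usage sketch (x_batch and y_batch are placeholder arrays):

```python
loss = model.train_on_batch(x_batch, y_batch)  # scalar loss
logs = model.train_on_batch(x_batch, y_batch,
                            return_dict=True)  # e.g. {'loss': ...}
```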

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
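A hedged override sketch of the canonical pattern (assumes (x, y) batches and a compiled optimizer, loss, and metrics):

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data                      # assumes (inputs, targets) batches
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred,
                                      regularization_losses=self.losses)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```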

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.SelfAttentionBlock(*args, **kwargs)[source]#

Bases: Model

Implements the SAB block from [1] which represents learnable self-attention.

[1] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019). Set transformer: A framework for attention-based permutation-invariant neural networks. In International Conference on Machine Learning (pp. 3744-3753). PMLR.

Creates a self-attention block which will typically be used as part of a set transformer architecture according to [1].

Parameters:
input_dim : int

    The dimensionality of the input data (last axis).

attention_settings : dict

    A dictionary which will be unpacked as the arguments for the MultiHeadAttention layer. See https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention.

num_dense_fc : int

    The number of hidden layers for the internal feedforward network.

dense_settings : dict

    A dictionary which will be unpacked as the arguments for the Dense layer.

use_layer_norm : boolean

    Whether to apply layer normalization before and after attention + feedforward.

**kwargs : dict, optional, default: {}

    Optional keyword arguments passed to the __init__() method of tf.keras.Model.
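A hedged construction sketch; the settings dictionaries below are illustrative and simply forward to tf.keras.layers.MultiHeadAttention and tf.keras.layers.Dense:

```python
from bayesflow.summary_networks import SelfAttentionBlock

block = SelfAttentionBlock(
    input_dim=32,
    attention_settings=dict(num_heads=4, key_dim=32),
    num_dense_fc=2,
    dense_settings=dict(units=64, activation="relu"),
    use_layer_norm=True,
)
```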

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These losses become part of the model's topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These metrics become part of the model's topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer's "trainable_variables" (e.g. variables, biases) or "non_trainable_variables" (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving an unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.
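A minimal custom-layer sketch using add_weight (the layer and shapes are illustrative):

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    def build(self, input_shape):
        # Kernel and bias are created lazily, once the input shape is known.
        self.w = self.add_weight(name="w",
                                 shape=(input_shape[-1], 4),
                                 initializer="glorot_uniform",
                                 trainable=True)
        self.b = self.add_weight(name="b", shape=(4,),
                                 initializer="zeros", trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b
```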

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution.

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass through the self-attention layer.

Parameters:
x : tf.Tensor

    Input of shape (batch_size, set_size, input_dim).

Returns:
out : tf.Tensor

    Output of shape (batch_size, set_size, input_dim).
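A brief usage sketch, reusing the block constructed in the sketch above with a placeholder batch of sets:

```python
import tensorflow as tf

x = tf.random.normal((16, 50, 32))  # (batch_size, set_size, input_dim)
out = block(x)                      # same shape: (16, 50, 32)
```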

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model's predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, the return value has shape (batch_size, d0, .. dN-1), i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, or tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or 'auto'. The number of batches to run during each tf.function call. If set to "auto", keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or 'auto'. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of 'auto' turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
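For instance, a Dense layer can report its output shape without being called on real data; a minimal sketch:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
# None marks a free (batch) dimension in the input shape.
print(layer.compute_output_shape((None, 8)))  # (None, 4)
```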

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.
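A short sketch of the TensorSpec-in, TensorSpec-out contract, assuming the default float32 compute dtype:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
spec = tf.TensorSpec(shape=(None, 8), dtype=tf.float32)
print(layer.compute_output_signature(spec))
# TensorSpec(shape=(None, 4), dtype=tf.float32, name=None)
```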

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).
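For example, a Dense layer must be built before its scalars can be counted; a minimal sketch:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build(input_shape=(None, 8))  # weights must exist before counting
print(layer.count_params())         # 8 * 4 kernel entries + 4 biases = 36
```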

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples

(1:1 mapping between weights and samples), or in the case of

temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead, pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
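A minimal, self-contained sketch of both return styles (the toy model and random data here are illustrative only):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])

x_test = np.random.random((32, 3))
y_test = np.random.random((32, 1))

results = model.evaluate(x_test, y_test, batch_size=8)    # [loss, mae]
named = model.evaluate(x_test, y_test, return_dict=True)  # {'loss': ..., 'mae': ...}
print(model.metrics_names, results, named)
```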

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer’s weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled or, 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
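A minimal sketch tying several of the arguments above together (the toy model and random data are illustrative only):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((100, 3))
y = np.random.random((100, 1))

history = model.fit(x, y, epochs=5, batch_size=16,
                    validation_split=0.2, verbose=0)
print(history.history["loss"])      # one entry per epoch
print(history.history["val_loss"])  # validation loss per epoch
```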

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.
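A minimal round-trip sketch (the clone has the same configuration but freshly initialized weights):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4, activation="relu")
config = layer.get_config()
clone = tf.keras.layers.Dense.from_config(config)  # same architecture, new weights
```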

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
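A minimal sketch of topological loading with the TensorFlow checkpoint format (the "ckpt/weights" prefix is a hypothetical path):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.save_weights("ckpt/weights")  # hypothetical checkpoint prefix

# A fresh model with the same topology can restore the weights.
restored = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
restored.load_weights("ckpt/weights")
```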

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.
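A small sketch of the caching behavior described above, assuming a compiled toy model:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="sgd", loss="mse")

fn = model.make_train_function()
assert model.make_train_function() is fn       # cached until the next compile()
fresh = model.make_train_function(force=True)  # bypass the cache and rebuild
```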

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.
ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
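A minimal sketch contrasting batched predict() with a direct call for small inputs (the toy model and random data are illustrative only):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

x = np.random.random((1000, 3))
preds = model.predict(x, batch_size=128)  # batched inference, returns numpy

# For a handful of samples, calling the model directly is faster:
small = model(x[:4], training=False)      # returns a tensor, not numpy
```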

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
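One common reason to override this method is Monte Carlo dropout, i.e. keeping dropout active at inference time. A sketch (tf.keras.utils.unpack_x_y_sample_weight splits the data structure described above):

```python
import tensorflow as tf

class MCDropoutModel(tf.keras.Sequential):
    def predict_step(self, data):
        x, _, _ = tf.keras.utils.unpack_x_y_sample_weight(data)
        # training=True keeps dropout layers active during inference.
        return self(x, training=True)
```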

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.
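A short sketch of how the format is selected (paths are hypothetical; the '.h5' variant requires h5py):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.save_weights("ckpt/my_model")  # TensorFlow checkpoint (filepath is a prefix)
model.save_weights("weights.h5")     # HDF5, selected by the '.h5' suffix
```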

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of an exact name. In that case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
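A minimal sketch, including the print_fn pattern for capturing the summary as strings:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,), name="hidden"),
    tf.keras.layers.Dense(1, name="out"),
])

model.summary(show_trainable=True)

lines = []
model.summary(print_fn=lines.append)  # collect each summary line instead of printing
```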

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.
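
A minimal sketch with synthetic data (model, shapes, and metric choices are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")

# Metrics reflect only this batch because reset_metrics=True by default.
results = model.test_on_batch(x, y, return_dict=True)
print(results)  # e.g. {'loss': ..., 'mae': ...}
```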

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.
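
A minimal sketch of an override that mirrors the default evaluation logic, assuming a compiled model and batches of (inputs, targets) without sample weights:

```python
import tensorflow as tf

class CustomEvalModel(tf.keras.Model):
    def test_step(self, data):
        x, y = data                                    # unpack one batch
        y_pred = self(x, training=False)               # forward pass
        self.compiled_loss(y, y_pred)                  # updates the loss tracker
        self.compiled_metrics.update_state(y, y_pred)  # updates compiled metrics
        return {m.name: m.result() for m in self.metrics}
```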

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
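
A minimal round-trip sketch (architecture only; weights are not part of the JSON):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
json_string = model.to_json()

# Rebuild the same architecture from the JSON string.
restored = tf.keras.models.model_from_json(json_string)
```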

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: raised unconditionally, since the method is no longer supported and poses a security risk.

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
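
A minimal sketch of a manual batch-level training loop with synthetic data (model and shapes are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="rmsprop", loss="mse")

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")

# Each call performs exactly one gradient update on this batch.
for _ in range(5):
    loss = model.train_on_batch(x, y)
```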

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
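
A minimal sketch of an override reproducing the default behavior, assuming (inputs, targets) batches and a compiled model (see the guide linked above for more elaborate variants):

```python
import tensorflow as tf

class CustomTrainModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data                                   # unpack one batch
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)           # forward pass
            loss = self.compiled_loss(
                y, y_pred, regularization_losses=self.losses)
        # Backpropagation and one optimizer update.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```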

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.SequenceNetwork(*args, **kwargs)[source]#

Bases: Model

Implements a sequence of MultiConv1D layers followed by an (optionally bidirectional) LSTM network.

For details and rationale, see [1]:

[1] Radev, S. T., Graw, F., Chen, S., Mutters, N. T., Eichel, V. M., Bärnighausen, T., & Köthe, U. (2021). OutbreakFlow: Model-based Bayesian inference of disease outbreak dynamics with invertible neural networks and its application to the COVID-19 pandemics in Germany. PLoS computational biology, 17(10), e1009472.

https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009472

Creates a stack of inception-like layers followed by an LSTM network, with the idea of learning vector representations from multivariate time series data.

Parameters:
summary_dim : int, optional, default: 10

The number of learned summary statistics.

num_conv_layers : int, optional, default: 2

The number of convolutional layers to use.

lstm_units : int, optional, default: 128

The number of hidden LSTM units.

conv_settings : dict or None, optional, default: None

The arguments passed to the MultiConv1D internal networks. If None, defaults will be used from default_settings. If a dictionary is provided, it should contain the following keys:
  • layer_args (dict) : arguments for tf.keras.layers.Conv1D without kernel_size
  • min_kernel_size (int) : the minimum kernel size (>= 1)
  • max_kernel_size (int) : the maximum kernel size

bidirectional : bool, optional, default: False

Indicates whether the involved LSTM network is bidirectional (i.e. processes the series forward and backward in time) or unidirectional (forward in time only). Defaults to False; setting it to True may improve performance.

**kwargs : dict

Optional keyword arguments passed to the __init__() method of tf.keras.Model.
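
A minimal usage sketch (shapes and argument values are illustrative):

```python
import numpy as np
from bayesflow.summary_networks import SequenceNetwork

# 16 simulated multivariate time series: 100 time steps, 3 observed variables.
x = np.random.rand(16, 100, 3).astype("float32")

summary_net = SequenceNetwork(summary_dim=10, bidirectional=True)
summaries = summary_net(x)  # expected shape: (16, 10)
```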

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These losses become part of the model's topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors,

losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These metrics become part of the model's topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable

that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer's "trainable_variables" (e.g. variables, biases) or "non_trainable_variables" (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when

trainable has been set to True with synchronization set as ON_READ.
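
A minimal sketch of a custom layer creating its variables in build() via add_weight (names, shapes, and initializers are illustrative):

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    def build(self, input_shape):
        # Create trainable variables once the input shape is known.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(input_shape[-1], 4),
            initializer="glorot_uniform",
            trainable=True,
        )
        self.bias = self.add_weight(
            name="bias", shape=(4,), initializer="zeros", trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel) + self.bias
```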

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of

shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs a forward pass through the network by first passing x through the sequence of multi-convolutional layers and then applying the LSTM network.

Parameters:
x : tf.Tensor

Input of shape (batch_size, n_time_steps, n_time_series)

Returns:
out : tf.Tensor

Output of shape (batch_size, summary_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See

tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or

a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during

training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar

coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be

wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to

run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for

tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. 0, meaning no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
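
A minimal sketch evaluating on a tf.data dataset (model, data, and batch size are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(16)

# No batch_size argument here: the dataset already yields batches.
results = model.evaluate(dataset, return_dict=True)
```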

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layers state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled or, 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
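
A minimal sketch with array inputs and a validation split (model, shapes, and epoch count are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="rmsprop", loss="mse")

x = np.random.rand(128, 4).astype("float32")
y = np.random.rand(128, 1).astype("float32")

history = model.fit(x, y, batch_size=16, epochs=3, validation_split=0.2)
print(history.history["loss"])      # training loss per epoch
print(history.history["val_loss"])  # validation loss per epoch
```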

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode. AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape. RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
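A minimal sketch of a save/load round trip; file names are illustrative, and the ‘.h5’ suffix selects the HDF5 format:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,), name="d1"),
    tf.keras.layers.Dense(1, name="d2"),
])
model.save_weights("weights.h5")

# Topological loading (default): the architecture must match.
clone = tf.keras.models.clone_model(model)
clone.build((None, 4))
clone.load_weights("weights.h5")

# Name-based loading, available for HDF5 weight files.
clone.load_weights("weights.h5", by_name=True)
```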

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.

RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
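A short sketch contrasting batched predict() with a direct __call__ for a handful of samples, as discussed above:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

x = np.random.random((64, 3))
y_batched = model.predict(x, batch_size=32)   # NumPy output, batched

x_small = tf.convert_to_tensor(x[:4])
y_small = model(x_small, training=False)      # eager tensor, single call
print(y_batched.shape, y_small.shape)         # (64, 2) (4, 2)
```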

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
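A minimal sketch (the ArgmaxModel name is hypothetical) of overriding predict_step so that Model.predict returns class indices rather than logits; tf.keras.utils.unpack_x_y_sample_weight mirrors what the default implementation does with data:

```python
import numpy as np
import tensorflow as tf

class ArgmaxModel(tf.keras.Sequential):
    """Hypothetical model whose predictions are class indices."""

    def predict_step(self, data):
        # data may be x, (x,), or (x, y, ...); keep only the inputs.
        x, _, _ = tf.keras.utils.unpack_x_y_sample_weight(data)
        logits = self(x, training=False)
        return tf.argmax(logits, axis=-1)

model = ArgmaxModel([tf.keras.layers.Dense(3, input_shape=(4,))])
classes = model.predict(np.random.random((8, 4)))
print(classes.shape)  # (8,)
```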

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
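A minimal sketch of enabling eager execution to step through training code in a debugger:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer="sgd", loss="mse")
model.run_eagerly = True   # train_step now runs as plain Python
model.fit(np.ones((4, 2)), np.ones((4, 1)), epochs=1, verbose=0)
```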

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, or “h5”, indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.
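A minimal sketch (ScaledDense and its scale attribute are hypothetical) of overriding both hooks to persist a piece of extra state alongside the built-in weights:

```python
import numpy as np
import tensorflow as tf

class ScaledDense(tf.keras.layers.Dense):
    """Hypothetical Dense variant with extra Python state ('scale')."""

    def __init__(self, units, scale=1.0, **kwargs):
        super().__init__(units, **kwargs)
        self.scale = scale

    def call(self, inputs):
        return super().call(inputs) * self.scale

    def save_own_variables(self, store):
        super().save_own_variables(store)        # kernel/bias as usual
        store["scale"] = np.array([self.scale])  # extra entry

    def load_own_variables(self, store):
        super().load_own_variables(store)
        self.scale = float(store["scale"][0])
```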

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs, **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).
  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).
    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in HDF5 format.
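A minimal sketch of both formats; file names are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# TensorFlow checkpoint format: 'ckpt' is a prefix, several files appear.
model.save_weights("ckpt")
model.load_weights("ckpt")

# HDF5 format, selected by the '.h5' suffix.
model.save_weights("weights.h5")
model.load_weights("weights.h5")
```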

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings, which are the starting layer name and ending layer name (both inclusive), indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names. In that case, the start predicate will be the first element that matches layer_range[0], and the end predicate will be the last element that matches layer_range[1]. By default (None), all layers of the model are considered.

Raises:

ValueError: if summary() is called before the model is built.
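A minimal sketch of capturing the summary as a string with a custom print_fn:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

lines = []
model.summary(print_fn=lines.append)  # collect instead of printing
summary_text = "\n".join(lines)
print(summary_text.count("Dense"))    # both Dense layers appear
```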

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
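A minimal sketch of the round trip; note that only the architecture is restored, not the weights:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
json_string = model.to_json()

rebuilt = tf.keras.models.model_from_json(json_string)
rebuilt.summary()  # same architecture, freshly initialized weights
```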

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk and is no longer supported.

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
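A minimal sketch of a hand-rolled loop built on train_on_batch; data and hyperparameters are illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.random((128, 4))
y = np.random.random((128, 1))

for start in range(0, len(x), 32):
    logs = model.train_on_batch(
        x[start:start + 32], y[start:start + 32], return_dict=True)
    print(f"batch at {start}: loss={logs['loss']:.4f}")
```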

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
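A minimal sketch of a custom train_step following the pattern described above (forward pass, loss, backpropagation, metric updates); the CustomFitModel name is hypothetical:

```python
import numpy as np
import tensorflow as tf

class CustomFitModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)         # forward pass
            loss = self.compute_loss(y=y, y_pred=y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Update the loss tracker and any compiled metrics.
        for metric in self.metrics:
            if metric.name == "loss":
                metric.update_state(loss)
            else:
                metric.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = CustomFitModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(np.random.random((32, 4)), np.random.random((32, 1)), verbose=0)
```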

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names include the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.Sequential(*args, **kwargs)[source]#

Bases: Functional

Sequential groups a linear stack of layers into a tf.keras.Model.

Sequential provides training and inference features on this model.

Examples:

```python
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(8))

# Note that you can also omit the initial Input.
# In that case the model doesn't have any weights until the first call
# to a training/evaluation method (since it isn't yet built):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
# model.weights not created yet

# Whereas if you specify an Input, the model gets built
# continuously as you are adding layers:
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(4))
len(model.weights)  # Returns "2"

# When using the delayed-build pattern (no input shape specified), you can
# choose to manually build your model by calling
# build(batch_input_shape):
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(4))
model.build((None, 16))
len(model.weights)  # Returns "4"

# Note that when using the delayed-build pattern (no input shape
# specified), the model gets built the first time you call fit, eval,
# or predict, or the first time you call the model on some input data.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8))
model.add(tf.keras.layers.Dense(1))
model.compile(optimizer='sgd', loss='mse')
# This builds the model for the first time:
model.fit(x, y, batch_size=32, epochs=10)
```

Creates a Sequential model instance.

Args:

layers: Optional list of layers to add to the model.

name: Optional name for the model.

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add(layer)[source]#

Adds a layer instance on top of the layer stack.

Args:

layer: layer instance.

Raises:

TypeError: If layer is not a layer instance.

ValueError: In case the layer argument does not know its input shape.

ValueError: In case the layer argument has multiple output tensors, or is already connected somewhere else (forbidden in Sequential models).

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors,

losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable

that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated.

Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter,

collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution.

build(input_shape=None)[source]#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of

shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(inputs, training=None, mask=None)[source]#

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Args:

inputs: A tensor or list of tensors.

training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

mask: A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See

tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or

a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during

training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar

coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to

run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for

tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.
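A minimal sketch of per-output losses, loss weights, and metrics on a two-output functional model (the output names out_a/out_b are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
out_a = tf.keras.layers.Dense(1, name="out_a")(inputs)
out_b = tf.keras.layers.Dense(3, name="out_b")(inputs)
model = tf.keras.Model(inputs, [out_a, out_b])

# Dicts are keyed by output layer name; the total loss is the weighted sum.
model.compile(
    optimizer="adam",
    loss={"out_a": "mse", "out_b": "categorical_crossentropy"},
    loss_weights={"out_a": 1.0, "out_b": 0.5},
    metrics={"out_a": ["mae"], "out_b": ["accuracy"]},
)
```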

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask)[source]#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model.call(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)[source]#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples

(1:1 mapping between weights and samples), or in the case of

temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
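
Example (a minimal sketch; the model, data, and metric choices below are illustrative, not prescribed by the API):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.random((32, 4))
y = np.random.random((32, 1))

# With return_dict=True the result is a dict such as {'loss': ..., 'mae': ...};
# with the default return_dict=False it is a [loss, mae] list.
results = model.evaluate(x, y, batch_size=8, verbose=0, return_dict=True)
```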

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer's state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer's weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled or, 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
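
Example (a minimal sketch; shapes, hyperparameters, and the validation split are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((100, 4))
y = np.random.random((100, 1))

# Hold out the last 20% of the samples for validation; train for 3 epochs.
history = model.fit(x, y, batch_size=16, epochs=3,
                    validation_split=0.2, verbose=0)
print(history.history["loss"])      # per-epoch training loss
print(history.history["val_loss"])  # per-epoch validation loss
```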

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)[source]#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.
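
Example (a minimal round-trip sketch, using a plain Dense layer for illustration):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8, activation="relu")
config = layer.get_config()

# Reinstantiate an identically configured (but unbuilt, untrained) layer.
clone = tf.keras.layers.Dense.from_config(config)
assert clone.units == 8
```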

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()[source]#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return the config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric results is a dict (containing multiple metrics), each of them gets added to the top-level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {'loss': 0.2, 'accuracy': 0.7}.
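
Example (a minimal sketch; assumes a TensorFlow version that provides get_metrics_result, and uses illustrative data):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.evaluate(np.random.random((8, 3)), np.random.random((8, 1)), verbose=0)

# Query the current metric values without running another evaluation pass.
print(model.get_metrics_result())  # e.g. {'loss': ..., 'mae': ...}
```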

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode. AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape. RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
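
Example (a minimal sketch of a save/load round trip; the filename and topology are illustrative):

```python
import tensorflow as tf

def build_model():
    # The same topology must be rebuilt for topological weight loading.
    return tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

model = build_model()
model.save_weights("weights.h5")  # the '.h5' suffix selects HDF5 format

restored = build_model()
restored.load_weights("weights.h5")
```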

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {'loss': 0.2, 'accuracy': 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape. RuntimeError: if called in Eager mode.

pop()[source]#

Removes the last layer in the model.

Raises:

TypeError: if there are no layers in the model.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function. ValueError: In case of mismatch between the provided

input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
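
Example (a minimal sketch contrasting batched predict() with a direct call; shapes are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

x = np.random.random((64, 3))
preds = model.predict(x, batch_size=16, verbose=0)  # batched inference
print(preds.shape)  # (64, 2)

# For one small batch, calling the model directly avoids predict()'s overhead.
single = model(x[:1], training=False)
```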

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
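
Example (a hedged sketch of an override; the post-processing step is hypothetical):

```python
import tensorflow as tf

class ScaledOutputModel(tf.keras.Sequential):
    def predict_step(self, data):
        # Run the default forward pass, then post-process its output.
        outputs = super().predict_step(data)
        return outputs * 2.0  # hypothetical rescaling step

model = ScaledOutputModel([tf.keras.layers.Dense(1, input_shape=(3,))])
# model.predict(...) now returns the rescaled outputs.
```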

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:

model: Keras model instance to be saved.
filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number

of arrays and their shapes must match the number and shapes of the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of exact names; in that case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
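
Example (a minimal sketch showing how a custom print_fn can capture the summary as a string):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Collect the summary lines instead of printing them to stdout.
lines = []
model.summary(print_fn=lines.append)
summary_text = "\n".join(lines)
```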

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to

json.dumps().

Returns:

A JSON string.
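
A round-trip sketch (toy model assumed) showing that extra kwargs reach json.dumps() and that only the architecture, not the weights, is serialized:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
json_string = model.to_json(indent=2)  # 'indent' is forwarded to json.dumps()

# Rebuild the architecture; weights must be restored separately.
reloaded = tf.keras.models.model_from_json(json_string)
```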

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: always raised, since the method is no longer supported and posed a security risk.

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays

    (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors

    (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
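
A minimal training-loop sketch (the batch generator and shapes are hypothetical):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

def toy_batches(n_batches, batch_size=32):
    # Hypothetical stand-in for a real data source.
    for _ in range(n_batches):
        yield (np.random.rand(batch_size, 4).astype("float32"),
               np.random.rand(batch_size, 1).astype("float32"))

for x, y in toy_batches(10):
    logs = model.train_on_batch(x, y, return_dict=True)  # e.g. {'loss': 0.12}
```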

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
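
A sketch of a custom override following the pattern in the guide linked above (assuming (x, y) batches without sample weights):

```python
import tensorflow as tf

class CustomTrainModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data  # assumption: (inputs, targets) batches
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred,
                                      regularization_losses=self.losses)
        # Backpropagation and parameter update.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```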

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.SequentialNetwork(*args, **kwargs)[source]#

Bases: SequenceNetwork

Deprecated class for amortized posterior estimation; use its base class SequenceNetwork instead.

Creates a stack of inception-like layers followed by an LSTM network, with the idea of learning vector representations from multivariate time series data.

Parameters:
summary_dim : int, optional, default: 10

The number of learned summary statistics.

num_conv_layers : int, optional, default: 2

The number of convolutional layers to use.

lstm_units : int, optional, default: 128

The number of hidden LSTM units.

conv_settings : dict or None, optional, default: None

The arguments passed to the MultiConv1D internal networks. If None, defaults will be used from default_settings. If a dictionary is provided, it should contain the following keys:

  • layer_args (dict) : arguments for tf.keras.layers.Conv1D without kernel_size

  • min_kernel_size (int) : the minimum kernel size (>= 1)

  • max_kernel_size (int) : the maximum kernel size

bidirectional : bool, optional, default: False

Indicates whether the involved LSTM network is bidirectional (i.e., processing the sequence forward and backward in time) or unidirectional (forward in time only). Defaults to False; setting it to True may increase performance.

**kwargs : dict

Optional keyword arguments passed to the __init__() method of tf.keras.Model

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These losses become part of the model's topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors,

losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These metrics become part of the model's topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:
value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility.

Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable

that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:
name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer's

“trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be

aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated.

Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter,

collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when

trainable has been set to True with synchronization set as ON_READ.
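
A sketch of a custom layer creating state with add_weight() in build(); the layer and names are hypothetical:

```python
import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
    """Hypothetical layer that learns one scale factor per input feature."""

    def build(self, input_shape):
        self.scale = self.add_weight(
            name="scale",
            shape=(input_shape[-1],),
            initializer="ones",
            trainable=True,
        )

    def call(self, inputs):
        return inputs * self.scale
```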

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of

shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)#

Performs a forward pass through the network by first passing x through the sequence of multi-convolutional layers and then applying the LSTM network.

Parameters:
x : tf.Tensor

Input of shape (batch_size, n_time_steps, n_time_series)

Returns:
out : tf.Tensor

Output of shape (batch_size, summary_dim)
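
A usage sketch matching the documented shapes (the batch size, number of time steps, and number of time series are arbitrary here; recall that the class itself is deprecated):

```python
import tensorflow as tf
from bayesflow.summary_networks import SequentialNetwork

# 64 simulations, 100 time steps, 3 observed time series.
x = tf.random.normal((64, 100, 3))

summary_net = SequentialNetwork(summary_dim=10)
summaries = summary_net(x)  # expected shape: (64, 10)
```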

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See

tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or

a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during

training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar

coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be

wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to

run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for

tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model.call(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
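
A minimal sketch with in-memory arrays (toy data and model assumed):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.rand(256, 4).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# Evaluate in batches of 64; return_dict=True yields e.g. {'loss': ..., 'mae': ...}.
results = model.evaluate(x, y, batch_size=64, verbose=0, return_dict=True)
```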

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layers state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (eg, in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled or, 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
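
A minimal sketch tying several of the arguments above together (the toy data and callback choice are assumptions):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(512, 4).astype("float32")
y = np.random.rand(512, 1).astype("float32")

history = model.fit(
    x, y,
    batch_size=32,
    epochs=5,
    validation_split=0.2,  # hold out the last 20% of samples for validation
    callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)],
    verbose=0,
)
# history.history records per-epoch losses, e.g. keys 'loss' and 'val_loss'.
```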

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
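
A config round-trip sketch (functional toy model assumed); the clone has the same architecture but freshly initialized weights:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

config = model.get_config()
clone = tf.keras.Model.from_config(config)  # same topology, new weights
```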

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#   'd1.kernel': model.d1.kernel,
#   'd1.bias': model.d1.bias,
#   'd2.kernel': model.d2.kernel,
#   'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#   'd1.kernel': d1.kernel,
#   'd1.bias': d1.bias,
#   'd2.kernel': d2.kernel,
#   'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode. AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape. RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:

  • Structure (e.g. a single input, a list of 2 inputs, etc)

  • Shape

  • Rank (ndim)

  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
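A minimal sketch of partial loading with by_name=True and skip_mismatch=True; layer names, shapes, and the filename are illustrative:

```python
import tensorflow as tf

def make_model(head_units):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(8, input_shape=(4,), name="feat"),
        tf.keras.layers.Dense(head_units, name="head"),
    ])

model = make_model(head_units=1)
model.save_weights("weights.h5")

# Same architecture except the head changed shape; load what still fits.
new_model = make_model(head_units=2)
new_model.load_weights("weights.h5", by_name=True, skip_mismatch=True)
# A warning is displayed for the skipped "head" layer.
```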

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().
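A minimal sketch contrasting the two entry points; the model and shapes are illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# Large input: let predict() handle batching and return NumPy arrays.
big_x = np.random.random((1024, 3))
preds = model.predict(big_x, batch_size=128)

# Small input inside a loop: call the model directly for lower overhead.
small_x = tf.constant([[0.1, 0.2, 0.3]])
out = model(small_x, training=False)  # eager tensor
out_np = out.numpy()
```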

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.
ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
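A minimal sketch of overriding predict_step; the sigmoid post-processing is an illustrative choice, not something the base class does:

```python
import tensorflow as tf

class SigmoidPredictModel(tf.keras.Model):
    def predict_step(self, data):
        # data may be x alone or an (x, y, sample_weight) structure.
        x, _, _ = tf.keras.utils.unpack_x_y_sample_weight(data)
        return tf.nn.sigmoid(self(x, training=False))
```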

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
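A minimal sketch of enabling eager execution so breakpoints and print() inside call() or train_step() behave like ordinary Python:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
print(model.run_eagerly)  # True; each batch now runs step by step
```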

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use
# keyword arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).
  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).
    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.
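A minimal sketch of the two formats; the filenames are illustrative:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# The '.h5' suffix selects a single HDF5 file.
model.save_weights("weights.h5")

# Any other path is treated as a TF-format checkpoint prefix;
# this writes ckpt_weights.index plus data shard files.
model.save_weights("ckpt_weights")
model.load_weights("ckpt_weights")
```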

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number

of arrays and their shape must match number of the dimensions of the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of an exact name, in which case the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
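A minimal sketch of capturing the summary as a string via print_fn; the lambda swallows any extra arguments that some versions pass along:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

lines = []
model.summary(print_fn=lambda line, *args, **kwargs: lines.append(line))
summary_text = "\n".join(lines)
```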

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
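A minimal sketch of a configuration round trip; note that only the architecture is serialized, not the weights:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
json_string = model.to_json()

# Rebuilds a fresh, untrained model with the same architecture.
rebuilt = tf.keras.models.model_from_json(json_string)
```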

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
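A minimal sketch of a custom train_step that mirrors the default behavior (forward pass, loss, backprop, metric updates) using the compute_loss and compute_metrics hooks documented below:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def train_step(self, data):
        x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compute_loss(x, y, y_pred, sample_weight)
        # Backpropagate and apply one optimizer update.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Update metric state and return the values to report.
        return self.compute_metrics(x, y, y_pred, sample_weight)
```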

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.SetTransformer(*args, **kwargs)[source]#

Bases: Model

Implements the set transformer architecture from [1] which ultimately represents a learnable permutation-invariant function. Designed to naturally model interactions in the input set, which may be hard to capture with the simpler DeepSet architecture.

[1] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019).

Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning (pp. 3744-3753). PMLR.

Creates a set transformer architecture according to [1] which will extract permutation-invariant features from an input set using a set of seed vectors (typically one for a single summary) with summary_dim output dimensions.

Recommended: When using transformers as summary networks, you may want to use a smaller learning rate during training, e.g., setting default_lr=1e-4 in a Trainer instance.

Parameters:
input_dim : int

The dimensionality of the input data (last axis).

attention_settings : dict or None, optional, default: None

A dictionary which will be unpacked as the arguments for the MultiHeadAttention layer. For instance, to use an attention block with 4 heads and key dimension 32, you can do:

attention_settings=dict(num_heads=4, key_dim=32)

You may also want to include stronger dropout regularization in small-to-medium data regimes:

attention_settings=dict(num_heads=4, key_dim=32, dropout=0.1)

For more details and arguments, see: https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention

dense_settings : dict or None, optional, default: None

A dictionary which will be unpacked as the arguments for the Dense layer. For instance, to use hidden layers with 32 units and a relu activation, you can do:

dense_settings=dict(units=32, activation='relu')

For more details and arguments, see: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense

use_layer_norm : boolean, optional, default: False

Whether to use layer normalization before and after attention + feedforward

num_dense_fc : int, optional, default: 2

The number of hidden layers for the internal feedforward network

summary_dim : int

The dimensionality of the learned permutation-invariant representation.

num_attention_blocks : int, optional, default: 2

The number of self-attention blocks to use before pooling.

num_inducing_points : int or None, optional, default: 32

The number of inducing points. Should be lower than the smallest set size. If None is selected, a vanilla self-attention block (SAB) will be used; otherwise ISAB blocks will be used. For num_attention_blocks > 1, we currently recommend always using some number of inducing points.

num_seeds : int, optional, default: 1

The number of “seed vectors” to use. Each seed vector represents a permutation-invariant summary of the entire set. If you use num_seeds > 1, the resulting seeds will be flattened into a 2-dimensional output, which will have a dimensionality of num_seeds * summary_dim.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the __init__() method of tf.keras.Model
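A minimal usage sketch, assuming the constructor accepts the parameters listed above as keyword arguments; the data are simulated for illustration, and the shapes follow the call() documentation below:

```python
import numpy as np
from bayesflow.summary_networks import SetTransformer

summary_net = SetTransformer(input_dim=3, summary_dim=16)

# A batch of 8 sets, each with 100 exchangeable observations of dim 3.
x = np.random.normal(size=(8, 100, 3)).astype(np.float32)
out = summary_net(x)  # shape (8, 16): permutation-invariant summaries
```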

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors,

losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.metrics.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable

that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when

trainable has been set to True with synchronization set as ON_READ.

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of

shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass through the set-transformer.

Parameters:
x : tf.Tensor

The input set of shape (batch_size, set_size, input_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, summary_dim * num_seeds)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See

tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or

a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during

training and testing. Each of this can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar

coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by

sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be

wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to

run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for

tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors,

one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape,

or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec

objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects,

describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built

(in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to child processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
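As a minimal sketch with synthetic data (the model, shapes, and compile settings are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.random((32, 3))
y = np.random.random((32, 1))
# With return_dict=True the results are keyed by metric name.
results = model.evaluate(x, y, batch_size=8, return_dict=True)
```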

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layers state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning

(inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to child processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or

tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure. e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is

that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
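As a minimal sketch with synthetic data (the model, shapes, and hyperparameters are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((100, 3))
y = np.random.random((100, 1))
# Hold out the last 20% of the data for validation at the end of each epoch.
history = model.fit(x, y, epochs=2, batch_size=16, validation_split=0.2)
print(history.history.keys())  # dict_keys(['loss', 'val_loss'])
```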

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return the config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
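A minimal sketch of the get_config()/from_config() round trip for a functional model (the architecture is illustrative; weights are not carried over):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

config = model.get_config()
# Rebuilds an architecturally identical model with fresh weights.
clone = tf.keras.Model.from_config(config)
```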

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.
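A minimal sketch (layer names are illustrative); note that index 0 of a functional model is its InputLayer:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="head")(x)
model = tf.keras.Model(inputs, outputs)

assert model.get_layer(name="hidden") is model.layers[1]
assert model.get_layer(index=2).name == "head"
```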

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric results is a dict (containing multiple metrics), each of them is added to the top-level dict returned by this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode. AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape. RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.
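jit_compile is set through compile(); a minimal sketch (the model and loss are illustrative, and this assumes a TF version where compile() accepts jit_compile):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
# Whether XLA actually helps (or works) depends on the ops in the model.
model.compile(optimizer="adam", loss="mse", jit_compile=True)
print(model.jit_compile)  # True
```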

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
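A minimal round-trip sketch (the path and architecture are illustrative); the ‘.h5’ suffix selects the HDF5 format:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.save_weights("weights.h5")

# A model with the same topology can load the weights back.
clone = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
clone.load_weights("weights.h5")
```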

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer. RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape. RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to child processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
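A minimal sketch contrasting predict() with a direct call (shapes are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
# Batched inference over many samples.
preds = model.predict(np.random.random((16, 3)), batch_size=4)
print(preds.shape)  # (16, 2)

# For a single small batch, a direct call avoids the predict() overhead.
preds = model(np.random.random((4, 3)).astype("float32"), training=False)
```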

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a

tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, or “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])
model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number

of arrays and their shapes must match the number and shapes of the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable `steps_per_execution` variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of an exact name. In that case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
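A common use of print_fn is capturing the summary as a string; a minimal sketch (the model is illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
lines = []
# print_fn is called once per line of the summary.
model.summary(print_fn=lambda line: lines.append(line))
summary_text = "\n".join(lines)
```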

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to

json.dumps().

Returns:

A JSON string.
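A minimal architecture round trip (the JSON contains no weights):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
json_string = model.to_json()
# Rebuilds the architecture with fresh weights.
clone = tf.keras.models.model_from_json(json_string)
```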

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
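
A minimal usage sketch with a toy model (illustrative only):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((16, 4)).astype("float32")
y = np.random.random((16, 1)).astype("float32")

# Performs exactly one gradient update on this batch.
loss = model.train_on_batch(x, y)
```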

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
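
A minimal override sketch following the pattern from the customization guide linked above (forward pass under a GradientTape, loss calculation, backpropagation, metric updates):

```python
import tensorflow as tf

class CustomFitModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # forward pass
            loss = self.compiled_loss(
                y, y_pred, regularization_losses=self.losses)
        # Backpropagation.
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        # Metric updates.
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```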

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.SplitNetwork(*args, **kwargs)[source]#

Bases: Model

Implements a vertical stack of networks and concatenates their individual outputs. Allows for splitting of data to provide an individual network for each split of the data.

Creates a composite network of num_splits subnetworks of type network_type, each with configuration specified by meta.

Parameters:
num_splits : int

The number of splits for the data, which will equal the number of sub-networks.

split_data_configurator : callable

Function that takes the arguments i and x, where i is the index of the network and x is the input to the SplitNetwork. Should return the input for the corresponding sub-network.

For example, to achieve a network which is permutation-invariant both vertically (i.e., across rows) and horizontally (i.e., across columns), one could do:

```python
def split(i, x):
    selector = tf.where(x[:, :, 0] == i, 1.0, 0.0)
    selected = x[:, :, 1] * selector
    split_x = tf.stack((selector, selected), axis=-1)
    return split_x
```

where x[:,:,0] contains an integer indicating which split the data in x[:,:,1] belongs to. All values in x[:,:,1] that are not selected are set to zero. The selector is passed along with the modified data, indicating which rows belong to the split i.

network_type : callable, optional, default: InvariantNetwork

Type of neural network to use.

network_kwargs : dict, optional, default: {}

A dictionary containing the configuration for the networks.

**kwargs

Optional keyword arguments to be passed to the tf.keras.Model superclass.
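
A minimal construction sketch, reusing the split function from the docstring above (the data layout, with an indicator column in x[:,:,0], is an assumption for illustration):

```python
import tensorflow as tf
from bayesflow.summary_networks import SplitNetwork

def split(i, x):
    # x[:, :, 0] marks which split each row belongs to; x[:, :, 1] is the data.
    selector = tf.where(x[:, :, 0] == i, 1.0, 0.0)
    selected = x[:, :, 1] * selector
    return tf.stack((selector, selected), axis=-1)

# Two sub-networks of the default network_type, one per split.
summary_net = SplitNetwork(num_splits=2, split_data_configurator=split)
```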

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These losses become part of the model’s topology and are tracked in `get_config`.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s `Input`s. These metrics become part of the model’s topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:

value: Metric tensor.
name: String metric name.
**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.metrics.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:

name: Variable name.
shape: Variable shape. Defaults to scalar if unspecified.
dtype: The type of the variable. Defaults to self.dtype.
initializer: Initializer instance (callable).
regularizer: Regularizer instance (callable).
trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
constraint: Constraint instance (callable).
use_resource: Whether to use a ResourceVariable or not.
synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when trainable has been set to True with synchronization set as ON_READ.
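
A typical use is inside a custom layer’s build(); a minimal sketch:

```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    def build(self, input_shape):
        # Created lazily once the input dimensionality is known.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(input_shape[-1], 8),
            initializer="glorot_uniform",
            trainable=True,
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)
```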

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.
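
For instance, a subclassed model can be built standalone like this (a minimal sketch):

```python
import tensorflow as tf

class TwoLayer(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.d1 = tf.keras.layers.Dense(8)
        self.d2 = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.d2(self.d1(x))

model = TwoLayer()
model.build((None, 4))  # creates the weights without calling on real data
model.summary()
```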

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs a forward pass through the subnetworks and concatenates their output.

Parameters:
x : tf.Tensor

Input of shape (batch_size, n_obs, data_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, out_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’:’accuracy’, ‘output_b’:[‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:

inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.

Returns:

None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:

x: Input data.
y: Target data.
y_pred: Predictions returned by the model (output of model.call(x)).
sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.
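
For example:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8)
# Builds the layer as a side effect and returns TensorShape([None, 8]).
print(layer.compute_output_shape((None, 4)))
```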

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:

Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).
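
For example:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])
print(model.count_params())  # 4 * 8 kernel weights + 8 biases = 40
```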

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
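
A minimal usage sketch with in-memory arrays (illustrative only):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.random((64, 4)).astype("float32")
y = np.random.random((64, 1)).astype("float32")

# Evaluates in batches of 16 and returns {'loss': ..., 'mae': ...}.
results = model.evaluate(x, y, batch_size=16, return_dict=True, verbose=0)
```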

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layers state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ becomes 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g., in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on the verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:
  • A tuple (x_val, y_val) of Numpy arrays or tensors.
  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.
  • A tf.data.Dataset.
  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance; instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer. Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:
  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None. Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided. Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({“x0”: x0, “x1”: x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple(“example_tuple”, [“y”, “x”])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple(“other_tuple”, [“x”, “y”, “z”])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in tf.function.

ValueError: In case of mismatch between the provided input data and what the model expects or when the input data is empty.
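
A minimal usage sketch tying several of the arguments above together (toy data, illustrative only):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((128, 4)).astype("float32")
y = np.random.random((128, 1)).astype("float32")

history = model.fit(x, y, batch_size=32, epochs=3,
                    validation_split=0.2, verbose=0)
print(history.history["loss"])      # per-epoch training loss
print(history.history["val_loss"])  # per-epoch validation loss
```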

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return the config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.
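
For example:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="head")(x)
model = tf.keras.Model(inputs, outputs)

hidden = model.get_layer(name="hidden")
first = model.get_layer(index=0)  # the InputLayer of this functional model
```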

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies options for loading weights (only valid for a SavedModel file).
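
A minimal save/restore round trip (topological loading; an .h5 weights file is assumed for illustration):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.save_weights("weights.h5")

# Later: rebuild the same architecture, then load by topology.
restored = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
restored.load_weights("weights.h5")
```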

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict is wrapped in a tf.function.
ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
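A minimal usage sketch (data and shapes are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

x = np.random.random((100, 3))
preds = model.predict(x, batch_size=32, verbose=0)  # shape (100, 2)

# For a single small batch, calling the model directly is faster.
single = model(x[:1], training=False)
```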

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
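A minimal sketch of overriding predict_step, assuming predict() is fed plain inputs (no targets); the post-processing step is purely illustrative:

```python
import tensorflow as tf

class PostprocessedModel(tf.keras.Model):
    def predict_step(self, data):
        # `data` may arrive as a one-element tuple; keep only the inputs.
        x = data[0] if isinstance(data, tuple) else data
        y_pred = self(x, training=False)
        # Illustrative post-processing of the forward pass.
        return tf.nn.softmax(y_pred)
```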

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](

https://keras.io/guides/serialization_and_saving/) for details.

Args:
filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”, indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5 formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format. tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use
# keyword arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or ‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in HDF5 format.
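A round-trip sketch in both formats (file names are hypothetical):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# TensorFlow checkpoint format: "my_checkpoint" is a prefix, not one file.
model.save_weights("my_checkpoint")
model.load_weights("my_checkpoint")

# HDF5 format, selected by the '.h5' suffix (requires h5py).
model.save_weights("weights.h5")
model.load_weights("weights.h5")
```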

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shape must match the number of the dimensions of the weights of the layer (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout. If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models. Defaults to False.

show_trainable: Whether to show if a layer is trainable. Defaults to False.

layer_range: a list or tuple of 2 strings, which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in summary. It also accepts regex patterns instead of exact name. In such case, start predicate will be the first element it matches to layer_range[0] and the end predicate will be the last element it matches to layer_range[1]. By default None which considers all layers of model.

Raises:

ValueError: if summary() is called before the model is built.
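For example, to capture the summary as a string rather than printing it (a sketch):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

lines = []
model.summary(print_fn=lines.append)
summary_text = "\n".join(lines)
```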

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned.

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
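A round-trip sketch; to_json() stores only the architecture, so the rebuilt model has freshly initialized weights:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

json_string = model.to_json()
rebuilt = tf.keras.models.model_from_json(json_string)
```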

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
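A minimal usage sketch (data is illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.random((8, 3))
y = np.random.random((8, 2))
results = model.train_on_batch(x, y, return_dict=True)
# results is a dict such as {'loss': ..., 'mae': ...}
```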

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
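A minimal sketch of a custom train_step following the pattern from the guide linked above; it assumes fit() is called with (x, y) pairs:

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data  # assumes (x, y) pairs, no sample weights
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```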

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

class bayesflow.summary_networks.TimeSeriesTransformer(*args, **kwargs)[source]#

Bases: Model

Implements a many-to-one transformer architecture for time series encoding. Some ideas can be found in [1]:

[1] Wen, Q., Zhou, T., Zhang, C., Chen, W., Ma, Z., Yan, J., & Sun, L. (2022). Transformers in time series: A survey. arXiv preprint arXiv:2202.07125. https://arxiv.org/abs/2202.07125

Creates a transformer architecture for encoding time series data into fixed size vectors given by summary_dim. It features a recurrent network given by template_type which is responsible for providing a single summary of the time series which then attends to each point in the time series processed via a series of num_attention_blocks self-attention layers.

Important: Assumes that positional encodings have been appended to the input time series, e.g., through a custom configurator.

Recommended: When using transformers as summary networks, you may want to use a smaller learning rate during training, e.g., setting default_lr=5e-5 in a Trainer instance.

Layer normalization (controllable through the use_layer_norm keyword argument) may not always work well in certain applications. Consider setting it to False if the network is underperforming.

Parameters:
input_dim : int

The dimensionality of the input data (last axis).

attention_settings : dict or None, optional, default: None

A dictionary which will be unpacked as the arguments for the MultiHeadAttention layer. If None, default settings will be used (see bayesflow.default_settings). For instance, to use an attention block with 4 heads and key dimension 32, you can do:

attention_settings=dict(num_heads=4, key_dim=32)

You may also want to include dropout regularization in small-to-medium data regimes:

attention_settings=dict(num_heads=4, key_dim=32, dropout=0.1)

For more details and arguments, see: https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention

dense_settings : dict or None, optional, default: None

A dictionary which will be unpacked as the arguments for the Dense layer. For instance, to use hidden layers with 32 units and a relu activation, you can do:

dense_settings=dict(units=32, activation='relu')

For more details and arguments, see: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense

use_layer_norm : boolean, optional, default: True

Whether to use layer normalization before and after attention + feedforward.

num_dense_fc : int, optional, default: 2

The number of hidden layers for the internal feedforward network.

summary_dim : int

The dimensionality of the learned permutation-invariant representation.

num_attention_blocks : int, optional, default: 2

The number of self-attention blocks to use before pooling.

template_type : str or callable, optional, default: 'lstm'

The many-to-one (learnable) transformation of the time series. If 'lstm', an LSTM network will be used. If 'gru', a GRU unit will be used. If callable, a reference to template_type will be stored as an attribute.

bidirectional : bool, optional, default: False

Indicates whether the involved LSTM template network is bidirectional (i.e., forward and backward in time) or unidirectional (forward in time). Defaults to False, but setting it to True may increase performance in some applications.

template_dim : int, optional, default: 64

Only used if template_type in ['lstm', 'gru']. The number of hidden units (equiv. output dimensions) of the recurrent network. When using bidirectional=True, the output dimensions of the template will be double the template_dim size, so consider reducing template_dim by half.

**kwargs : dict, optional, default: {}

Optional keyword arguments passed to the __init__() method of tf.keras.Model.
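A usage sketch, assuming positional encodings have already been appended to the series so that the last axis has input_dim entries (all shapes are illustrative):

```python
import numpy as np
from bayesflow.summary_networks import TimeSeriesTransformer

# 3 data dimensions plus 1 appended positional-encoding dimension.
summary_net = TimeSeriesTransformer(input_dim=4, summary_dim=32)

x = np.random.normal(size=(16, 100, 4)).astype(np.float32)
summaries = summary_net(x)  # shape (16, 32)
```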

__call__(*args, **kwargs)#
property activity_regularizer#

Optional regularizer function for the output of this layer.

add_loss(losses, **kwargs)#

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):

    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These losses become part of the model’s topology and are tracked in get_config.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.

**kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)#

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):

    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model’s Inputs. These metrics become part of the model’s topology and are tracked when you save the model via save().

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:
value: Metric tensor.

name: String metric name.

**kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - When the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)#

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)#

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)#

Adds a new variable to the layer.

Args:
name: Variable name.

shape: Variable shape. Defaults to scalar if unspecified.

dtype: The type of the variable. Defaults to self.dtype.

initializer: Initializer instance (callable).

regularizer: Regularizer instance (callable).

trainable: Boolean, whether the variable should be part of the layer’s “trainable_variables” (e.g. variables, biases) or “non_trainable_variables” (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.

constraint: Constraint instance (callable).

use_resource: Whether to use a ResourceVariable or not.

synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.

aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

**kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
ValueError: When giving unsupported dtype and no initializer or when

trainable has been set to True with synchronization set as ON_READ.

property autotune_steps_per_execution#

Settable property to enable tuning for steps_per_execution

build(input_shape)#

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.
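For instance, a subclassed model can be built without real data (a sketch):

```python
import tensorflow as tf

class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

model = TinyModel()
model.build(input_shape=(None, 3))  # weights are created here
model.summary()
```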

build_from_config(config)#

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config[“input_shape”]) method, which creates weights based on the layer’s input shape in the supplied config. If your config contains other information needed to load the layer’s state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(x, **kwargs)[source]#

Performs the forward pass through the transformer.

Parameters:
x : tf.Tensor

Time series input of shape (batch_size, num_time_points, input_dim)

Returns:
out : tf.Tensor

Output of shape (batch_size, summary_dim)

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)#

Configures the model for training.

Example:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```

Args:
optimizer: String (name of optimizer) or optimizer instance. See tf.keras.optimizers.

loss: Loss function. May be a string (name of loss function), or a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.

metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), function or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=[‘accuracy’]. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={‘output_a’: ‘accuracy’, ‘output_b’: [‘accuracy’, ‘mse’]}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[[‘accuracy’], [‘accuracy’, ‘mse’]] or metrics=[‘accuracy’, [‘accuracy’, ‘mse’]]. When you pass the strings ‘accuracy’ or ‘acc’, we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings ‘crossentropy’ and ‘ce’ as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.

loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients. If a list, it is expected to have a 1:1 mapping to the model’s outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.

weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing.

run_eagerly: Bool. If True, this Model’s logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy. Defaults to False.

steps_per_execution: Int or ‘auto’. The number of batches to run during each tf.function call. If set to “auto”, keras will automatically tune steps_per_execution during runtime. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs, when used with distributed strategies such as ParameterServerStrategy, or with small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution). Defaults to 1.

jit_compile: If True, compile the model training step with XLA. [XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

pss_evaluation_shards: Integer or ‘auto’. Used for tf.distribute.ParameterServerStrategy training only. This arg sets the number of shards to split the dataset into, to enable an exact visitation guarantee for evaluation, meaning the model will be applied to each dataset element exactly once, even if workers fail. The dataset must be sharded to ensure separate workers do not process the same data. The number of shards should be at least the number of workers for good performance. A value of ‘auto’ turns on exact evaluation and uses a heuristic for the number of shards based on the number of workers. A value of 0 means no visitation guarantee is provided. NOTE: Custom implementations of Model.test_step will be ignored when doing exact evaluation. Defaults to 0.

**kwargs: Arguments supported for backwards compatibility only.

compile_from_config(config)#

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

property compute_dtype#

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)#

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:
x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)#

Computes an output mask tensor.

Args:
inputs: Tensor or list of tensors.

mask: Tensor or list of tensors.

Returns:
None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)#

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:
x: Input data.

y: Target data.

y_pred: Predictions returned by the model (output of model.call(x)).

sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

compute_output_shape(input_shape)#

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)#

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:
Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()#

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
ValueError: if the layer isn’t yet built (in which case its weights aren’t yet defined).

property distribute_reduction_method#

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy#

The tf.distribute.Strategy this model was created under.

property dtype#

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy#

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic#

Whether the layer is dynamic (eager-only); set in the constructor.

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)#

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).

batch_size: Integer or None. Number of samples per batch of

computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

sample_weight: Optional Numpy array of weights for the test samples,

used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead, pass sample weights as the third element of x.

steps: Integer or None. Total number of steps (batches of samples)

before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.

callbacks: List of keras.callbacks.Callback instances. List of

callbacks to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

**kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.
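
A minimal sketch of an evaluate() call, assuming random placeholder data and an already-compiled model with one metric:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.random((32, 4))
y = np.random.random((32, 1))

loss, mae = model.evaluate(x, y, batch_size=8, verbose=0)    # list of scalars
results = model.evaluate(x, y, verbose=0, return_dict=True)  # {'loss': ..., 'mae': ...}
```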

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

export(filepath)#

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
filepath: str or pathlib.Path object. Path where to save

the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()#

Finalizes the layer’s state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer’s weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)#

Trains the model for a fixed number of epochs (dataset iterations).

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.

  • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).

  • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

  • A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If these include sample_weights as a third component, note that sample weighting applies to the weighted_metrics argument but not the metrics argument in compile(). If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.

y: Target data. Like the input data x,

it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).

batch_size: Integer or None.

Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

epochs: Integer. Number of epochs to train the model.

An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.

verbose: ‘auto’, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = one line per epoch. ‘auto’ defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.

validation_split: Float between 0 and 1.

Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance. If both validation_data and validation_split are provided, validation_data will override validation_split. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

validation_data: Data on which to evaluate

the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:

  • A tuple (x_val, y_val) of Numpy arrays or tensors.

  • A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.

  • A tf.data.Dataset.

  • A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).

validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.

shuffle: Boolean (whether to shuffle the training data

before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

class_weight: Optional dictionary mapping class indices (integers)

to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

sample_weight: Optional Numpy array of weights for

the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance, instead provide the sample_weights as the third element of x. Note that sample weighting does not apply to metrics specified via the metrics argument in compile(). To apply sample weighting to your metrics, you can specify them via the weighted_metrics in compile() instead.

initial_epoch: Integer.

Epoch at which to start training (useful for resuming a previous training run).

steps_per_epoch: Integer or None.

Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset, and ‘steps_per_epoch’ is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy:

  • steps_per_epoch=None is not supported.

validation_steps: Only relevant if validation_data is provided and

is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.

validation_batch_size: Integer or None.

Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

validation_freq: Only relevant if validation data is provided.

Integer or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

Unpacking behavior for iterator-like inputs:

A common pattern is to pass a tf.data.Dataset, generator, or tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as ‘x’. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.
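
A small sketch of this tuple convention for tf.data inputs (placeholder arrays; any dataset yielding such tuples behaves the same way):

```python
import numpy as np
import tensorflow as tf

x = np.random.random((32, 4)).astype("float32")
y = np.random.random((32, 1)).astype("float32")
w = np.ones((32,), dtype="float32")

# Each element unpacks as (inputs, targets, sample_weights).
ds = tf.data.Dataset.from_tensor_slices((x, y, w)).batch(8)
```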

A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:

namedtuple("example_tuple", ["y", "x"])

it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:

namedtuple("other_tuple", ["x", "y", "z"])

where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)

Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).

Raises:

RuntimeError: 1. If the model was never compiled, or 2. If model.fit is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data

and what the model expects or when the input data is empty.
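
A minimal sketch of a fit() call with validation_split and the returned History object, using random placeholder data:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x = np.random.random((100, 4))
y = np.random.random((100, 1))

history = model.fit(x, y, batch_size=16, epochs=3,
                    validation_split=0.2, verbose=0)
print(history.history.keys())  # dict_keys(['loss', 'val_loss'])
```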

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)#

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)#

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
config: A Python dictionary, typically the

output of get_config.

Returns:

A layer instance.

get_build_config()#

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()#

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()#

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and to continue updating the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return a config dict for init parameters if they are basic types. Raises NotImplementedError in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.
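
A sketch of a config round trip for a functional model (this only restores the architecture; trained weights are not carried over):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

config = model.get_config()
rebuilt = tf.keras.Model.from_config(config)  # same architecture, fresh weights
```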

get_input_at(node_index)#

Retrieves the input tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)#

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)#

Retrieves the input shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)#

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.

index: Integer, index of layer.

Returns:

A layer instance.
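
For instance (an illustrative functional model with hypothetical layer names):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(8, name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="head")(x)
model = tf.keras.Model(inputs, outputs)

hidden = model.get_layer(name="hidden")
head = model.get_layer(index=2)  # index 0 is the Input layer
assert head.name == "head"
```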

get_metrics_result()#

Returns the model’s metrics values as a dict.

If any of the metric results is a dict (containing multiple metrics), each of them is added to the top-level dict returned by this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.

get_output_at(node_index)#

Retrieves the output tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)#

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)#

Retrieves the output shape(s) of a layer at a given node.

Args:
node_index: Integer, index of the node

from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_weight_paths()#

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
A dict where keys are variable paths and values are tf.Variable

instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()#

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

property inbound_nodes#

Return Functional API nodes upstream of this layer.

property input#

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.

AttributeError: If no inbound nodes are found.

property input_mask#

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape#

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.

RuntimeError: if called in Eager mode.

property input_spec#

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

```python
self.input_spec = tf.keras.layers.InputSpec(ndim=4)
```

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

```
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=1. Full shape received: [2]
```

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.

property jit_compile#

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers#
load_own_variables(store)#

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)#

Loads all layer weights from a saved file.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
filepath: String, path to the weights file to load. For weight files

in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().

skip_mismatch: Boolean, whether to skip loading of layers where

there is a mismatch in the number of weights, or a mismatch in the shape of the weights.

by_name: Boolean, whether to load weights by name or by topological

order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.

options: Optional tf.train.CheckpointOptions object that specifies

options for loading weights (only valid for a SavedModel file).
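
A sketch of a save_weights()/load_weights() round trip in the TensorFlow checkpoint format; the prefix "weights_ckpt" is a placeholder path:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.save_weights("weights_ckpt")  # TensorFlow checkpoint format

clone = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
clone.load_weights("weights_ckpt")  # topological loading (by_name=False)

x = np.ones((1, 4), dtype="float32")
assert np.allclose(model(x), clone(x))
```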

property losses#

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

make_predict_function(force=False)#

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the predict function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)#

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the test function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)#

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and regenerate the function with force=True.

Args:
force: Whether to regenerate the train function and skip the cached

function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {‘loss’: 0.2, ‘accuracy’: 0.7}.

property metrics#

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names#

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name#

Name of the layer (string), set in the constructor.

property name_scope#

Returns a tf.name_scope instance for this class.

property non_trainable_variables#

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights#

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes#

Return Functional API nodes downstream of this layer.

property output#

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.

RuntimeError: if called in Eager mode.

property output_mask#

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape#

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape. RuntimeError: if called in Eager mode.

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)#

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
x: Input samples. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

  • A tf.data dataset.

  • A generator or keras.utils.Sequence instance.

A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

batch_size: Integer or None.

Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).

verbose: “auto”, 0, 1, or 2. Verbosity mode.

0 = silent, 1 = progress bar, 2 = single line. “auto” defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.

steps: Total number of steps (batches of samples)

before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

callbacks: List of keras.callbacks.Callback instances.

List of callbacks to apply during prediction. See [callbacks]( https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

max_queue_size: Integer. Used for generator or

keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

workers: Integer. Used for generator or keras.utils.Sequence input

only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

use_multiprocessing: Boolean. Used for generator or

keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:

RuntimeError: If model.predict is wrapped in a tf.function.

ValueError: In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
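
A short sketch contrasting predict() with a direct call, per the note above (random placeholder data):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
x = np.random.random((1000, 3))

preds = model.predict(x, batch_size=128, verbose=0)  # batched; returns NumPy arrays
one = model(x[:1], training=False)                   # small batch; returns a tensor
```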

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)#

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)#

Returns predictions for a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
RuntimeError: If model.predict_on_batch is wrapped in a tf.function.
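
For a single batch that already fits in memory (a minimal sketch with random data):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
preds = model.predict_on_batch(np.random.random((8, 3)))  # NumPy array, shape (8, 2)
```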

predict_step(data)#

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.
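
A minimal sketch of overriding predict_step, here applying a softmax to the raw outputs at inference time; it assumes plain array/tensor inputs, in which case data is just the batch of features:

```python
import numpy as np
import tensorflow as tf

class SoftmaxPredictModel(tf.keras.Sequential):
    def predict_step(self, data):
        # For plain array inputs, `data` is the feature batch.
        logits = self(data, training=False)
        return tf.nn.softmax(logits)

model = SoftmaxPredictModel([tf.keras.layers.Dense(3, input_shape=(4,))])
probs = model.predict(np.random.random((8, 4)), verbose=0)  # rows sum to 1
```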

reset_metrics()#

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()#
property run_eagerly#

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
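
Two equivalent ways to enable eager execution for debugging (a sketch; the optimizer and loss are placeholders):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
# or set the attribute after compiling:
model.run_eagerly = True
```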

save(filepath, overwrite=True, save_format=None, **kwargs)#

Saves a model as a TensorFlow SavedModel or HDF5 file.

See the [Serialization and Saving guide](https://keras.io/guides/serialization_and_saving/) for details.

Args:

model: Keras model instance to be saved.

filepath: str or pathlib.Path object. Path where to save the model.

overwrite: Whether we should overwrite any existing model at the

target location, or instead ask the user via an interactive prompt.

save_format: Either “keras”, “tf”, “h5”,

indicating whether to save the model in the native Keras format (.keras), in the TensorFlow SavedModel format (referred to as “SavedModel” below), or in the legacy HDF5 format (.h5). Defaults to “tf” in TF 2.X, and “h5” in TF 1.X.

SavedModel format arguments:
include_optimizer: Only applied to SavedModel and legacy HDF5

formats. If False, do not save the optimizer state. Defaults to True.

signatures: Only applies to SavedModel format. Signatures to save

with the SavedModel. See the signatures argument in tf.saved_model.save for details.

options: Only applies to SavedModel format.

tf.saved_model.SaveOptions object that specifies SavedModel saving options.

save_traces: Only applies to SavedModel format. When enabled, the

SavedModel will store the function traces for each layer. This can be disabled, so that only the configs of each layer are stored. Defaults to True. Disabling this will decrease serialization time and reduce file size, but it requires that all custom layers/models implement a get_config() method.

Example:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, input_shape=(3,)),
    tf.keras.layers.Softmax()])

model.save("model.keras")
loaded_model = tf.keras.models.load_model("model.keras")
x = tf.random.uniform((10, 3))
assert np.allclose(model.predict(x), loaded_model.predict(x))
```

Note that model.save() is an alias for tf.keras.models.save_model().

save_own_variables(store)#

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)#

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is [tf.TensorSpec(...), ...]. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs,
                                                   **kwarg_specs)
})
```

Args:
dynamic_batch: Whether to set the batch sizes of all the returned

tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([…], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)#

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings

    (ordered names of model layers).

  • For every layer, a group named layer.name
    • For every such layer group, a group attribute weight_names,

      a list of strings (ordered names of weights tensor of the layer).

    • For every weight in the layer, a dataset

      storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
filepath: String or PathLike, path to the file to save the weights

to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

overwrite: Whether to silently overwrite any existing file at the

target location, or provide the user with a manual prompt.

save_format: Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or

‘.keras’ will default to HDF5 if save_format is None. Otherwise, None becomes ‘tf’. Defaults to None.

options: Optional tf.train.CheckpointOptions object that specifies

options for saving weights.

Raises:
ImportError: If h5py is not available when attempting to save in

HDF5 format.

set_weights(weights)#

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
weights: a list of NumPy arrays. The number of arrays and their shapes must match those of the layer’s weights (i.e. it should match the output of get_weights).

Raises:
ValueError: If the provided weights list does not match the

layer’s specifications.

property state_updates#

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful#
property steps_per_execution#

Settable steps_per_execution variable. Requires a compiled model.

property submodules#

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)#

Prints a string summary of the network.

Args:
line_length: Total length of printed lines

(e.g. set this to adapt the display to different terminal window sizes).

positions: Relative or absolute positions of log elements

in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.

print_fn: Print function to use. By default, prints to stdout.

If stdout doesn’t work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.

expand_nested: Whether to expand the nested models.

Defaults to False.

show_trainable: Whether to show if a layer is trainable.

Defaults to False.

layer_range: a list or tuple of 2 strings,

which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in the summary. It also accepts regex patterns instead of an exact name. In that case, the start predicate will be the first element that matches layer_range[0], and the end predicate will be the last element that matches layer_range[1]. Defaults to None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
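
A common pattern, sketched here, is to capture the summary text with a custom print_fn (the list-append approach is one of several options):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

lines = []
model.summary(print_fn=lines.append)  # called once per summary line
text = "\n".join(lines)
```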

property supports_masking#

Whether this layer supports computing a mask using compute_mask.

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)#

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the

    model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors (in case the model has

    multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy

array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).

sample_weight: Optional array of the same length as x, containing

weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

reset_metrics: If True, the metrics returned will be only for this

batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a

dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a

tf.function.

test_step(data)#

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.
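
A minimal sketch of a custom test_step, assuming (inputs, targets) batches without sample weights and a compiled loss/metrics:

```python
import tensorflow as tf

class CustomEvalModel(tf.keras.Model):
    def test_step(self, data):
        x, y = data  # assumes (inputs, targets) batches
        y_pred = self(x, training=False)
        self.compiled_loss(y, y_pred)                  # update the loss tracker
        self.compiled_metrics.update_state(y, y_pred)  # update compiled metrics
        return {m.name: m.result() for m in self.metrics}
```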

to_json(**kwargs)#

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to

json.dumps().

Returns:

A JSON string.
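
A sketch of a JSON round trip (architecture only; weights are not serialized):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

json_string = model.to_json()
rebuilt = tf.keras.models.model_from_json(json_string)
```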

to_yaml(**kwargs)#

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments

to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)#

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays

    (in case the model has multiple inputs).

  • A TensorFlow tensor, or a list of tensors

    (in case the model has multiple inputs).

  • A dict mapping input names to the corresponding array/tensors,

    if the model has named inputs.

y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s).

sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.

reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.

return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
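A minimal custom-loop sketch (model, x, and y are illustrative placeholders):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])

x = np.random.randn(256, 4).astype("float32")
y = np.random.randn(256, 1).astype("float32")

batch_size = 32
for epoch in range(2):
    for i in range(0, len(x), batch_size):
        # One gradient update per call; with the default reset_metrics=True
        # the returned values reflect this batch only.
        metrics = model.train_on_batch(
            x[i:i + batch_size], y[i:i + batch_size], return_dict=True
        )
    print(f"epoch {epoch}: {metrics}")
```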

train_step(data)#

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
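A sketch of the standard override pattern (as in the guide linked above; compiled_loss and compiled_metrics assume a model configured via compile()):

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            # Forward pass in training mode.
            y_pred = self(x, training=True)
            # Loss configured in compile(), plus layer regularization losses.
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Backpropagate and apply one optimizer update.
        gradients = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        # Forwarded to on_train_batch_end, e.g. {'loss': 0.2, 'accuracy': 0.7}.
        return {m.name: m.result() for m in self.metrics}
```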

property trainable#
property trainable_variables#

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights#

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.
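A short sketch of how freezing a layer moves its variables between the weight lists (the layer names are illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(3,), name="frozen"),
    tf.keras.layers.Dense(1, name="head"),
])

# Freezing a layer moves its kernel and bias out of trainable_weights.
model.layers[0].trainable = False

print(len(model.trainable_weights))      # 2: head kernel + bias
print(len(model.non_trainable_weights))  # 2: frozen kernel + bias
print(len(model.weights))                # 4: all variables, trainable or not
```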

property updates#
property variable_dtype#

Alias of Layer.dtype, the dtype of the weights.

property variables#

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights#

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

classmethod with_name_scope(method)#

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>

Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

bayesflow.summary_networks.warn(message, category=None, stacklevel=1, source=None)#

Issue a warning, or maybe ignore it or raise an exception.
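This entry appears to be Python’s built-in warnings.warn, picked up by the documentation generator from the module namespace. A minimal usage sketch (the message text is illustrative):

```python
import warnings

# category selects the warning class; stacklevel=2 attributes the warning
# to the caller of the function that emits it.
warnings.warn("illustrative warning message", UserWarning, stacklevel=2)
```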